CG reconstructions of historical buildings, whether still standing or long since turned to ruins, whether aimed at fidelity or paying homage to the wonders of the past, stand as a significant challenge for 3D artists. In this context, superimposed reconstructions are even more daunting: a time machine where past and present need to coexist, at the same time and in the same space. The chances of success depend on finding the right subject and references, and the work requires familiarity with perspective and measurement systems while making the most of one's modelling and texturing skills, all the way up to the final compositing stage. But how do we define a good potential subject? For this, my eyes would naturally turn toward
my hometown, Rome, and hence to the Roman Forum. The perfect candidate would satisfy three
conditions: its remains would need to be standing, as much as possible, in their original location and position; they would need to be a significant portion of the original building, since the vast majority of pictures and references are of course going to be framed
around that. Finally, the presence of reliable sources
would be essential to allow for a reasonably accurate reconstruction. In my quest, after evaluating different options,
I eventually turned to the southern part of the Forum, where three Corinthian columns
along with a piece of entablature are standing on top of a massive arcaded podium. These are the remains of the Temple of Castor
and Pollux, and here is where I would finally settle. Indeed, the site is largely free from later alterations and the original structures are in a fairly decent state of preservation. A Roman temple named after Greek gods: the myth of its foundation traces
back to the 5th century BC and a war between the last king of Rome and the newborn Roman
republic where the twin gods are said to have appeared on the battlefield in aid of the
latter. Like many other buildings of the past, it underwent a few reconstructions. Rebuilt and enlarged in 117 BC, it was restored again in 73 BC. Destroyed by a fire a few years later, it was finally rebuilt under Emperor Tiberius in 6 AD. This last stage, known as the Augustan temple, is what remains today and what this reconstruction would be focused
on. At its peak, the temple would have appeared
as truly magnificent: a seven-meter-tall double podium with chambers, also called tabernae or loculi, placed between the columns and supposedly used by the office for weights and measures, which was hosted in this very temple, to conduct its business. Access to the temple was possible through lateral flights of stairs, still visible today. On top of the podium stood 38 Corinthian columns of the finest quality, all of them surmounted by a richly decorated
entablature. For this superimposition, I relied on two
main sources. One is "The Architectural Antiquities of Rome" by George Ledwell Taylor, which contains descriptions and beautiful hand-drawn plates of many buildings of the Roman tradition. The other is the "Occasional Papers of the Nordic Institutes in Rome", which is specific to the Temple of Castor and Pollux, and in particular to its Augustan phase; it contains hypothetical reconstructions and, most importantly, precise on-site measurements for the many different parts. Along with the right subject, a good base
image is of course needed as well. And again, I set three conditions: the shot would need to have a decent resolution, it would need to frame as much of the whole structure as possible, and, more importantly, the lighting conditions would need to be as neutral as possible, to minimize pre- and post-processing edits. The one I eventually chose is this pin by Rebecca Campanaro; I don't really know whether it's her own shot, but in any case that's the source I used. The original image is 3024 by 4032 pixels, which is not much but actually among the best resolutions I could find, and details appear quite washed out. But what drew my attention to this shot in particular is the nice overcast weather, which would make lighting a bit easier to manage throughout the whole process. To analyze the base image's perspective and
scale, I used fSpy, which I'm sure many of you already know. You are going to need both the standalone version and the Blender add-on. In the former, you can simply load the reference image and start looking for vanishing points. Now, this is not the usual simple room with clearly defined lines, so I knew I needed to work with some assumptions and a bit of
tolerance. To define the vanishing point on the X axis
I used the base of the podium and the lower edge of the entablature as references, while
for the Y axis I leveraged the fact that the front of the temple was apparently intended to be aligned with the facade of the nearby Basilica Julia. For the Z axis, any reference is fine as long as it is perpendicular to the terrain. The position of the gizmo within fSpy will
correspond to the origin point in Blender once the project is imported, and it is usually
good practice to place it at the floor level. However, since I needed to retrieve the real
world scale of the image, and the only measure that can be assessed from the image is that of the column (which I knew to be approximately 14.8 meters tall), I had to place it
at the base of the columns, which means that I would then need to work my way to the ground
in a different way. Speaking of measurements, the unit adopted is the original Roman foot, where one foot equals 29.64 cm and is divided into 12 inches. Based on this, I could, for instance, determine the podium to be 23 feet tall, or 6.82 meters, and the column to be exactly 50 feet tall, or 14.82 meters.
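As a side note, these conversions are trivial to script. A minimal Python sketch, with the helper name being my own and the 29.64 cm value taken from the sources:

```python
# Convert the ancient Roman foot (pes) to meters, assuming
# 1 Roman foot = 29.64 cm, divided into 12 inches (unciae).
ROMAN_FOOT_M = 0.2964  # meters per Roman foot

def feet_to_meters(feet, inches=0):
    """Roman feet (plus inches) to meters."""
    return (feet + inches / 12) * ROMAN_FOOT_M

print(round(feet_to_meters(23), 2))  # podium height -> 6.82
print(round(feet_to_meters(50), 2))  # column height -> 14.82
```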
At this stage, my aim was to validate the perspective and measurements coming from what I had just done in fSpy. Back in Blender, the first step was to recreate the plane upon which the columns are standing and, from there, based on the height of the podium, extrude down to meet the line of the ground, which apparently did not
happen. And that's due to the fact that from the base
of the podium up to the base of the column, the podium itself is actually sloped inward
by 4 feet and 6 inches, or 1.33 meters. Hence, I just needed to move the vertical face representing the podium's facade along the Y axis to where the podium's actual base is; by measuring the distance between the podium's base projected on the Z axis and the base of the columns, I would get 1.3 meters, which is 3 centimeters off its real counterpart and hence reasonably fine. Once I knew where the ground was, I could
start measuring other things as well, like the width of the pilaster's base, which is 1.74 meters in my .blend file, again off by 3 centimeters with respect to the one on
site. Then, I imported from the asset browser my
own Corinthian column, acting for now just as a placeholder, and resized it according to its real-world scale. Since I already knew its position on both the Z and Y axes, what I was left with was defining its correct placement on the X axis, based on the
background image. To assess perspective, I then created a plane
and aligned it with the entablature at the top. The validation here would come from the presence
of a bit of space between the front of the abacus and the lower edge of the entablature. I then duplicated the column and moved it to the right, again using the base image as a reference, just to measure the interaxial distance between the two columns at the bottom, which would amount to 3.7 meters in the .blend file. Again, I was stuck at a 3-centimeter delta with respect to the real counterpart, which indirectly told me that the setup of my .blend file with regard to perspective was apparently fine. Now, the accounts describe the temple as octastyle
(which means it has 8 columns at the front and rear) with eleven columns on each side. Combined with Taylor's assumption that the three standing columns are respectively the third, fourth and fifth from the north, this would allow me to define the position of the very first and last columns of the eastern side of the temple by using an array modifier on a plane, with the offset set to the interaxial distance. Unfortunately, Taylor's assumption would later prove to be wrong, but luckily enough this would still have no impact on this validation
stage. With the complete set of columns represented by these tiny square placeholders, I could finally assess the measures of the stylobate, which runs from column plinth to column plinth; that would result in 28 meters for the short edge and 39.1 meters for the long one, meaning off by 20 cm in the first case and by 50 cm in the second. Now, taking the position of the first column as fixed and considering the 3 cm delta in the interaxial distance, I would have expected the delta for the long edge, at 10 times the distance, to be around 30 cm, and the delta for the short edge, at 7 times the distance, to be approximately 20 cm.
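The expectation here is simple arithmetic: the per-span error accumulates once per interaxial span. A quick sanity check in Python, with the numbers from above:

```python
# Accumulated drift expected from a ~3 cm error on one interaxial span:
# 11 columns per side -> 10 spans; 8 columns at the front -> 7 spans.
per_span_delta = 0.03  # meters

expected_long = 10 * per_span_delta
expected_short = 7 * per_span_delta
print(round(expected_long, 2), round(expected_short, 2))  # -> 0.3 0.21
```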
Hence, the latter seems fine, while there seems to be a problem with the long one. Actually, they're both wrong. In fact, and this is something I have to admit I didn't know, the distance between columns is not supposed to be fixed across the entire temple. The columns at the corners would in general be thicker than the others, so for the intercolumniation to be homogeneous, the interaxial distance needs to be greater. Additionally, the columns at the front would be closer together than those on the sides, except for the two in the very middle, which would be the farthest apart in the entire temple, so as to provide a far more impressive
entrance into the structure. As such, I started adjusting the positions
of the placeholders according to this finding and the resulting measures of the stylobate
would now be 28 meters for the short edge, which is just by chance the same as before,
and 39.3 meters for the long edge. So, as expected, respectively 20 and 30 cm off their real counterparts. The work done in fSpy could thus be considered fairly correct, and I now felt safe moving on to the actual modelling stage. Whenever I need to model something, I always
ask myself what the best workflow would be for that specific occasion, and usually this
comes down to two different options. In the first case, my mesh has a curvature to be smoothed out, while its edges need to be kept nice and sharp. The subdiv modifier would indeed take care
of the curvature but at the same time it would also make my edges way too loose. To make them sharp again I would then need
to add support loops, which non-destructively speaking means adding a bevel modifier on
top of the subdiv modifier. In the second case, there would be no curvature
to be smoothed out and I would actually have the opposite kind of problem, where my edges
would be razor sharp as opposed to anything you would see in the real world. The bevel modifier can help again, this time
making the edges a bit softer. But without the interpolated information coming
from additional subdivisions, the smooth shaded version of the mesh would simply be a complete
mess, and that's because the software, when smooth shading, would still take into account
all of the normals of the mesh, some of which, if not all, sit at very harsh
angles. To amend this, we would need to enable the
"harden normals" option under the shading palette of the bevel modifier and within the
object data properties tab we would need to enable "auto smooth" as well. In this way, Blender would leave the big flat faces alone and, when shade smoothing the mesh, interpolate only among normals within a reasonable angle, the maximum viable angle being the one
specified by the user. In modelling the Temple, I tried to rely as much as possible on this second kind of workflow, as it would make the whole scene a lot more
efficient and easier to manage. The entire podium is in fact modelled along
these lines, as my intention here was to have fully manifold meshes, so as to replicate
the actual blocks a stonemason would have to produce in real life. A couple of tips. The first, technical: if you're using orthographic
references to model something (as it is the case here), consider splitting the view into
two, where in the first one you can always keep the orthographic view on whatever axis
you are working on, while in the second you are free to move around, inspect the mesh
and select whatever vertex, edge or face you may need. The second tip is to consider that these kinds of classical profiles are almost never the result of a freestyle exercise by the original architect; instead, they are combinations of different known and widely used base shapes, which also happen to have names and to be easily recognisable. So, whenever you fall short of references, you can still make your model look reasonable enough by delving into these profile primitives. With a section of the podium ready, I would
then start looking into the Corinthian column on top of it. Starting from the version I already had, I
knew I had to re-model the base from scratch, as the one from the Temple of Castor and Pollux
is very specific; and to do that I used the reference coming from one of the plates in
Taylor's book. The second change was related to the central
part of the capital, where the typical volutes would in this case be replaced by interlaced
spirals. For this kind of shape, I always start from the logarithmic spiral under the curves' spirals submenu, and then play around with the number of turns as well as the expansion force and the radius. Thickness can be added through the depth control under the bevel sub-palette, while to taper a specific point of the curve you can use Alt-S in combination with proportional editing.
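For intuition, a logarithmic spiral is just a radius growing exponentially with the angle. A small Python sketch of the idea (the parameter names are mine, loosely echoing the add-on's controls, not Blender's actual API):

```python
import math

def log_spiral(turns=2, expansion=0.4, radius=1.0, steps_per_turn=32):
    """Sample points of r = radius * exp(expansion * theta)."""
    pts = []
    for i in range(int(turns * steps_per_turn) + 1):
        theta = 2 * math.pi * i / steps_per_turn
        r = radius * math.exp(expansion * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

pts = log_spiral()
print(len(pts))  # -> 65
```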
Back in the main .blend file, I used the placeholders from the validation stage to instance all the columns at once. With a very simple geometry nodes system, I converted each face to a point and then instanced the entire column collection at each point. I then proceeded to remove all of the placeholders
except for those on the perimeter and in the pronaos, eventually using the "transform geometry" node between the collection and the "instance on points" node to perfect the location of
the columns. I would then create a separate collection
just for the columns' shafts, where I would create a few variations starting from the original one by adding edge loops at uneven distances. All of this to simulate the columns being built from drums of different dimensions, which in real life would serve the purpose of randomly distributing pivot points, making the whole structure more resilient in general, and specifically against natural events such as earthquakes. To make the different versions of the shaft
spawn randomly at each point, in the geonodes system I would add a new "instance on points"
node just for them to be then joined back with the original setup. For this system to work it is enough to tick
the "separate children" and "reset children" options on the "collection info" node as well
as the "pick instance" option on the second "instance on points" node. I also defined a random seed parameter to drive how the shaft instances are placed, by adding a random value node set to "integer", with a number of potential values equal to the number of shaft variations, and feeding it into the "instance index" socket of the "instance on points" node. Finally, to control the seed of the random value node, I created a group input parameter to be used from outside the node system to browse among different potential configurations.
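Conceptually, the seed setup behaves like the following Python sketch (illustrative names, not the geometry nodes API): a given seed always yields the same assignment of shaft variations to columns, and changing it reshuffles everything at once.

```python
import random

def pick_shafts(n_columns, n_variations, seed):
    """One shaft-variation index per column, deterministic per seed."""
    rng = random.Random(seed)
    return [rng.randrange(n_variations) for _ in range(n_columns)]

layout_a = pick_shafts(34, 4, seed=1)
layout_b = pick_shafts(34, 4, seed=1)  # same seed -> identical layout
layout_c = pick_shafts(34, 4, seed=2)  # different seed -> new layout
print(layout_a == layout_b)  # -> True
```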
It was now time to start assembling the podium, and to do that I began staggering copies of the different pieces unevenly, to add another layer of randomness to the composition. This would make the texturing process much easier later on and the final outcome much more realistic. If you want to avoid doing that by hand, as
I painfully did, I would suggest researching into the "accumulate field" concept of the
geometry nodes ecosystem, but in the end it is also important to always remember that
not everything needs to be procedural after all. To model the corners, especially when using
complex profiles, the life-saving option will always be the shear tool, which can be activated
on the left of the UI. A 90-degree corner corresponds to an offset of precisely 1, or -1 depending on the orientation, and from there it is enough to extrude along the new axis, straightening the extremity if needed by once again scaling it down to 0 on that specific axis. For the lower podium, I started positioning
the pilasters' bases at first, leaving the gaps corresponding to where the chambers should
be. In general, I also started working on the
connection between what will be rendered and the base image, and since the podium is sloped, I knew I also needed to model part of the inner wall for the future transition to look right. The main idea, anyway, was to try and keep
the original remains as much as possible in the final composited image. To complete the podium, I modelled the tabernae
as well. The sources seem to suggest, based on the
holes found on the different pilasters' bases, the presence of a closing mechanism, either
a folding grate or a proper door. For my reconstruction, I decided to go with
the latter. To model both the lateral and central flights
of stairs, one common way is to simply use an array modifier with an offset of 1 on the Z axis and on either the X or Y axis. While still somewhat flexible, this gets quite annoying when trying to manipulate the number of steps or the overall size of the flight all at once. As such, I built my own procedural and much more user-friendly system, with just a box to control the size and position of the flight and a parameter to define the number of steps comprised within that pre-defined space. It is quite easy to replicate: you just need
a 1x1x1 cube with the origin point at the front bottom-left corner, to which you can apply whatever modifiers you wish, representing the step; and the same cube, without modifiers, to be used as the controller. The idea here is simply to instance the steps
along the diagonal of the box. Within the geonodes editor we will use a curve
line whose start and end points need to be bound to the corresponding vertices of the controller. To do that, we can sample among the attributes
of the controller the respective positions of the two vertices and feed them into the
vector sockets of the curve line node. In this way, however we modify the shape and
position of the controller, the curve line will simply follow. We need then to convert the curve to points
adding a subdivide curve node in between to control the number of steps instanced. We can then add an "instance on points" node
and bring our step within the node editor, connecting it with the instance socket of
the aforementioned node. To get the Z dimension of each step right,
we can again sample the positions of two controller vertices that lie on the same vertical, separate them into their three components, and subtract the respective Z values. The result then needs to be divided by the number of steps, which in turn equals the number of cuts plus 1. We can then create a new vector with a size of 1 on the X and Y axes and with the Z value fed by the operation we just did. We can repeat the same process to define the steps' dimension on the Y axis, and everything is fine, but if we join what we did so far back with the controller, we notice that we have one step more than we need. To get rid of it, it is enough to store an
endpoint boolean value for the curve line with the "store named attribute" node and
to add at the end of the chain a "delete geometry" node set on "instance", where the selection
would be that very same named attribute. Lastly, we can also link the dimension of the step on the X axis to that of the controller, in the same way we did for the Y and Z axes.
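The arithmetic behind the system can be sketched in a few lines of Python (an illustration of the logic, not the node setup itself): the controller's height and depth are divided by the number of steps, i.e. the number of cuts plus one, and the step origins march along the diagonal.

```python
def stair_steps(height, depth, cuts):
    """Step origins along the diagonal, plus the rise and run per step."""
    n = cuts + 1                      # steps = cuts + 1
    rise, run = height / n, depth / n
    origins = [(i * run, i * rise) for i in range(n)]
    return origins, rise, run

origins, rise, run = stair_steps(height=2.1, depth=3.0, cuts=6)
print(len(origins), round(rise, 2))  # -> 7 0.3
```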
At this point I realized something was wrong: the distance between the last pilaster's base and the lateral flight of stairs was too wide. So, either the flight should have been wider,
which seems unlikely given that the original remains are still on site, or Taylor's assumption
that the three remaining columns were to be the third, fourth and fifth from north is
incorrect. Browsing the plates in the second source, the "Occasional Papers of the Nordic Institutes in Rome", I came across this representation, where nothing is really written down, but judging from the color choice for the columns on the eastern side, it seems to suggest that the three remaining columns are actually the fourth, fifth and sixth from the north instead. This, by the way, seems consistent with the remains on site and would definitely solve the mismatch. Hence, I went for that; even if it's quite a big change at this stage, it is never too late to make things right. For the naos, I modelled over one of the plates
coming from my sources, and since there are no remains of the original base profile, I
combined together some of the common shapes I mentioned when modelling the podium. Again, I staggered several copies of the shapes
unevenly, using the shear tool to take care of the corners. The entablature is composed, as usual, of the architrave with its three fasciae of increasing size, the frieze, which is apparently plain in this case, and the cornice. Here too, I modelled the different
pieces separately, obtaining unsubdivided manifold meshes. Along with that, I focused as well on creating
the various ornaments lying on the entablature. For each mesh composing the entablature, I
moved the origin point to one of the rightmost vertices, and I did the same for whatever ornament would need to be placed on that piece; the idea here was to define
a sort of lowest common denominator where the size of the entablature's piece would
equal one full pattern of its respective decoration. To manage both at the same time I created
a geometry nodes system on the entablature with a parameter to control its size on the
X axis. To instance the decoration along the mesh,
I used a curve line which would need to run straight across the mesh, and this can be
achieved by binding its start and ending points to two consecutive vertices of the entablature's
piece, using again, as for the flight of stairs, a couple of "sample index" nodes. To get the right ID for the desired vertices,
one practical solution is simply to scroll through the available indices until the curve
line is perfectly straight on all of the axes. I would then convert the curve to points with
the "evaluated" option on and add an "instance on points" node to the chain. I could then import my ornament and connect
it to the chain as well. To remove the iteration in excess, I used
once again a "delete geometry" node set on "instance" into which I would feed the information
coming from the stored endpoint selection attribute. I then linked the size of the decoration to
that of the entablature's piece so that they would scale together, and I also added a "subdivide curve" node right after the "curve line" node to get control over the number of iterations displayed within that defined space. To make this work, I simply needed to divide the scale of the ornament coming from the entablature by the number of cuts plus 1, or, in other terms, by the number of desired repetitions of the decoration. This is quite a convenient way to manage
repeating patterns along a variable distance, and I started to repeat the same procedure
for all of the other pieces of the entablature. A tip to better conform a mesh to curved surfaces
is to run an edge loop on the base mesh, isolate the portion of interest and make it a separate
object, more specifically a curve, and to use a curve modifier on the deforming mesh
to get the right shape. An additional suggestion would be, whenever
you need to sample among the vertices of a mesh, to place the geonodes system before any other modifier, so as to work with the least possible amount of information. During the process, it occurred to me that
a general rule combining the size of the entablature and the respective number of iterations for
the ornaments could be found. In particular, the latter would ideally be
equal to the former minus 1. This means that the operation can be automated,
either in the geonodes system itself or by using drivers. As I chose the second path, I copied the data path of the length attribute and pasted it into a newly created driver for the field driving the number of deco iterations, specifying the desired relation in the expression. I did that for all of the pieces of the entablature and, once done, I linked all of the length parameters of the different components to just that of the frieze, so as to define a sort of source parameter, again using drivers. As such, by tweaking just one single attribute, I got a fully procedurally resizable entablature, which came in quite handy when working on the final composition.
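In plain numbers, the rule I ended up with looks like this (a Python illustration; in Blender it lives in driver expressions, and the names here are mine):

```python
def deco_setup(length):
    """iterations = length - 1, plus the per-ornament scale that keeps
    one full pattern per unit of the entablature's piece."""
    iterations = length - 1
    cuts = iterations - 1            # "subdivide curve" cuts needed
    scale = length / (cuts + 1)      # ornament scale = length / repetitions
    return iterations, cuts, scale

iterations, cuts, scale = deco_setup(12)
print(iterations, cuts)  # -> 11 10
```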
Let me share one additional note on the roof tiles. Textures are certainly a great way to save
on polycount, but it is questionable whether the normal map would be enough for the magnitude
of the shapes involved. Without resorting to displacement or similar techniques, one efficient way to model roof tiles is to model one single tile and again create a geometry nodes system, this time with a grid converted to points, one point per face, and then instance the tile with the "instance on points" node. By manipulating the size on the X and Y axes
and their respective number of vertices we can get the whole roof. Moreover, we can add a bit of randomness to
it by feeding a random value for the rotation of each single tile and we can do the same
also for the size. Since these are all instances, the triangle count still amounts to that of the single tile we modelled at the beginning, which, considering how tweakable this system is, results in quite a cost-effective solution.
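The same logic, sketched outside Blender: one tile definition, a grid of placements, and a small seeded jitter in rotation and scale per tile (illustrative names, not the node API):

```python
import random

def tile_roof(cols, rows, tile_w, tile_d, jitter=0.03, seed=0):
    """(x, y, rotation, scale) per tile over a cols x rows grid."""
    rng = random.Random(seed)
    tiles = []
    for j in range(rows):
        for i in range(cols):
            rot = rng.uniform(-jitter, jitter)       # radians
            scl = 1 + rng.uniform(-jitter, jitter)
            tiles.append((i * tile_w, j * tile_d, rot, scl))
    return tiles

roof = tile_roof(cols=40, rows=25, tile_w=0.5, tile_d=0.45)
print(len(roof))  # -> 1000
```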
The last step in the modelling process was to create a few ground planes to help with the transition between the render and the
base image. With the modelling stage finally completed,
it was time for a basic clay render of the temple, to be imported into image compositing software to produce what would become the three main components of the final output: the base
image, the render itself and a cutout of the base image containing whatever needs to be
in front of the render. As such, within Photoshop, with the pen tool
I started creating masks around the pieces of the temple that I wanted to be in front
of everything in the final composited image. At this stage, a rough result would be more
than enough, as I would refine everything later in the final compositing stage once
the full render of the temple became available. Within Blender, I added an additional image slot to the current camera to load the cutout PNG produced in Photoshop, placing it in front of the render with an opacity of 1. This would not only allow me to use the cutout
as a direct reference for texturing, but also for some minor modelling adjustments. Since the three columns from the cutout are
obviously not casting any shadow, I isolated the three corresponding modelled columns
into a separate collection and from the outliner I would toggle the "indirect only" option
on. By enabling this option within the collection
containing the three columns, only their indirect impact on the scene, such as shadows and reflections,
would eventually be rendered out. The next step was to identify a proper HDRI
for the scene, and for that there is no easier way than browsing through the catalog available
with the Poly Haven addon. After a few tests, I decided that the "Kloofendal
overcast" HDRI was a good enough match for the base image. Moving on to actual texturing, the first step was to create a new material and apply it to the entire model through the overrides
menu. A decent procedural marble material would
need to have the following ingredients: a fine grain to drive some basic color variation,
more compact patterns to be used for primary marble veins, an additional pattern for some
secondary marble veins, and a grunge map for generating the orangish stains you see in references, a consequence of the rusting, over time, of the minerals naturally contained within marble. Additionally, there would be flaking and disintegration
coming from changes in the microstructure of the material. Lastly, dirt, of course, and the fact that
marble is a translucent material which means that ideally a marble shader should always
have some subsurface scattering going on. All considered, I still broke some of these rules; for instance, I did not use any subsurface scattering, since the final render would in any case be quite blurry and washed out to match the base image, and since at this point in time the temple would already be quite deteriorated anyway. The main ally in creating the mentioned patterns
is the noise texture node. To make it more interesting, a common trick
is to use another noise texture node, and specifically its color output, as vector data
in order to displace the original noise texture on a per-pixel basis and along different axes all at once. The problem we can see right away is that
the texture's pattern is identically repeated among the multiple instances of the model,
but we can build a simple node system to amend that. With the "random output" of the object info
node and the "random per island" output of the geometry node it is in fact possible to
randomly offset a texture either among instances, or among different pieces within the same
mesh. To take full advantage of that, I combined the two with a simple math operation and enhanced the result with a power node. I then separated the three coordinates of the texture and recombined them, again with a math operation in between, where one of the two inputs would be the output of this simple randomization node system.
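In spirit, the node group does something like this (a loose Python analogue with made-up values; the real thing operates on Blender's object info and geometry node outputs):

```python
def coord_offset(obj_random, island_random, exponent=3.0):
    """Combine per-object and per-island random values, then sharpen
    the result with a power, as a "power" math node would."""
    return (obj_random * island_random) ** exponent

def offset_coords(coords, offset):
    """Shift texture coordinates so each piece samples a new region."""
    x, y, z = coords
    return (x + offset, y + offset, z + offset)

o = coord_offset(0.8, 0.5)
shifted = offset_coords((0.25, 0.5, 0.75), o)
```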
I then proceeded to apply the first basic color variation, using the output of the noise texture as a mask and sampling
colors from my references. To build more compact patterns which will
be used to reproduce marble veining, I used a duplicated version of the node tree in which I reduced the roughness value of the noise texture used as displacement, to make the final pattern less spread out. With an additional color ramp, I then isolated some of the ridges of the pattern, with the aim of using them as an additional layer of color variation. When doing this kind of manipulation, I
usually like to start from very bright colors just to see clearly where the color variation
would be eventually applied and I would then switch to the intended hue only at the end. Additional vein patterns can be created very
simply with a voronoi texture set to "distance to edge" and with a color ramp with the white
pin moved all the way toward the blacks. Also in this case, the trick is to use the color output of a noise texture to displace the Voronoi, producing a quite convincing veining pattern. The suggestion, though, is to avoid exaggerating the effect, as it could easily make your material look cheap. To give the idea of separation and to make
the different pieces of the model stand out more, I multiplied on top of the result obtained
so far the output of the "ambient occlusion" node. It was then time to introduce some flaking
effect, for which I used a bump node fed again by another variation of the noise system built
at the beginning. I then went back to my model to adjust some bevel sizes a bit, to give an even greater sense of separation among shapes, and I also applied a grunge texture to account for the orangish discolorations and stains
we mentioned before. At this point, I added a couple of nodes to
mimic the tint of the base image, even though I wasn't really trying to color match the
render and the base image here, as I would do that in the post-processing stage. Finally, I used another grunge map for dirt and general muddiness. An additional tip is to multiply or overlay the output of the coordinate randomization node system on top of everything, to give
each distinct piece of the model a slightly different tint. It was now time to create the different proper
materials for the different meshes that would need specific treatment, such as the wall
of the cella, for which I imported the normal, color variation and ambient occlusion maps
coming from a tiled material I already had among my Substance Designer projects. Likewise, I created the specific materials
to be used for the roof tiles and for the doors of the tabernae. To manage the transition between the ground
planes and the base image, I built yet again a simple geonodes system. From the Botaniq addon, I imported a grass
sample and placed it into its own collection. Then, on the ground plane, I added a "distribute points on faces" node and a subsequent "instance on points" node. I brought my grass collection in and connected it to the latter, again with the "separate children" and "reset children" options ticked, and with the "pick instance" option enabled as well, in case I wanted a bit more variation by spawning different grass samples onto the plane.
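The scattering itself is conceptually simple; a plain-Python analogue of "distribute points on faces" over a rectangular plane could look like this (illustrative only, with density as points per square meter):

```python
import random

def scatter_points(width, depth, density, seed=0):
    """Uniform random (x, y) points over a width x depth plane."""
    rng = random.Random(seed)
    count = int(width * depth * density)
    return [(rng.uniform(0, width), rng.uniform(0, depth))
            for _ in range(count)]

pts = scatter_points(width=6.0, depth=4.0, density=50)
print(len(pts))  # -> 1200
```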
It was then a matter of playing with the density and scale of the instances, as well as with the positioning of the plane and, of course, the hue of the material applied to the grass samples. It was finally time for a full render. Here, a low number of samples with denoising
turned on would do the trick of slightly washing out the final output. For the final composited image, I imported the final render into Photoshop; the reason I did not do the compositing directly
within Blender is that I wanted to have full control over the masks used for the cutout
and for the different adjustment layers applied. So, my general suggestion would be to always
use, when available, the best tool for the task. After some basic adjustments to the brightness, contrast and saturation of the render, I proceeded to work on the transitions
between the latter, the base image and the cutout by manipulating the masks for both
the cutout and, of course, the ground. To help integrate the render with the background, I also used some basic dodge-and-burn techniques, especially trying to
match the brightness and tone of those portions of the render in direct contact with the ground. In the end, this would be my before and after. One last tip: if you need a wireframe render
of your model, you can do the following: enable the "wireframe" option in
the overlays menu and switch to the viewport shading mode. Disable whatever images you have applied to
your camera and under "view" select "viewport render image" and you are good to go. And that would be it, I hope you discovered
something interesting for your future projects. In the meantime, thank you for watching and
see you next time.