King Kong (1933)
King Kong is a 1933 American pre-Code adventure fantasy horror monster film[4] directed and produced by Merian C. Cooper and Ernest B. Schoedsack.
The use of music in the 1933 King Kong was groundbreaking for its time: the film arrived just as sound and image were first being synchronized, and it featured a score composed specifically for the film. The soundtrack and its synchronization were remarkable achievements for the era.
When live-action actors had to interact with the stop-motion Kong, a special method known as the Dunning process was used to combine two pieces of film. Alternatively, rear projection was employed: a screen was built into a miniature set, and live-action footage was projected onto it frame by frame to match the animation.
Footage of puppets—no bigger than 2 ft. tall and constructed from ball-and-socket joints and rabbit fur—was combined with live-action shots using techniques like double exposure and matte painting, with one minute of film taking as long as 150 hours to produce.
Here’s a look at the history of Kong effects, which spans everything from stop-motion animated models and large-scale animatronics to rear projection, motion capture, and the latest in digital wizardry. The 1933 King Kong is a classic of stop motion, regarded by many as a pioneering achievement of the art form.
INTERESTING VFX WORK
Doctor Strange in the Multiverse of Madness (2022)
Goosebumps (2015)
Lucifer (web series)
Game of Thrones (web series)
1. Prince, S. (2010) 'Through the Looking Glass: Philosophical Toys and Digital Visual Effects', Projections, 4(2), pp. 19-40.
According to Stephen Prince, filmmakers now have more creative options at their disposal because of digital visual effects. Physical correctness in depicting the behaviour of natural elements, including motion and sound, can be used to create an unreal world. He draws on the work of scientists and natural philosophers to highlight the significance of the amount of light required to create the ideal illusion. His discussion focuses on how 3D modelling depends on the perception of depth and space to make an illusion appear real. By recalling William Horner, the British mathematician who created a moving-image illusion by placing drawings on a horizontal wheel rather than a vertical one, he shows the importance of allowing multiple viewers to see the illusion simultaneously. He also discusses the operation of the camera and the roles of mirrors and lenses, adding that the creators of Avatar decided to use a technique known as spectral correction to filter blue and orange light.
Annotated Bibliography
Special Effects in Movies
Special effects in movies are mechanical or artificial visual effects that contribute heavily to the visual experience of the audience.
SUPERMAN RETURNS (2006)
There are two types of special effects in movie making: mechanical and optical. Mechanical effects, also known as practical effects, are real-world special effects created physically on a set, including prosthetic makeup effects such as the Oscar-winning make-up John Chambers created for the 1968 film Planet of the Apes.
Planet of the Apes(1968)
Special props cover a wide variety of things: whether a production needs a lightweight reproduction or an exact clone of a one-of-a-kind piece, a mould-design team can produce a prop to the required specifications.
Fake Gunshot
Thor: Love and Thunder (2022)
Invisible Effects
2. Prince, S. (2012) Digital Visual Effects in Cinema: The Seduction of Reality - Introduction.
The author addresses the assumption that when people hear about special effects, they immediately think of films like Iron Man, The Lord of the Rings, and Avatar. However, special effects are now used far more widely. Visual effects are not just set pieces to be photographed alongside live action; they play a major role in how film works as a storytelling medium and are not a minor aspect. The author breaks his explanation of how movie special effects operate into sections. The first, and most important, concerns the background: he discusses altering backgrounds using computer graphics and moving pictures, and how a suitable, realistic background can be produced with green and blue screens. In the second section, he discusses how digital lighting and colour techniques have altered both the way films are produced and the role of the cinematographer. Across multiple render passes, visual components and changes to light and colour can be manipulated with special effects. Lighting is the essential factor in making an illusion believable. Another significant topic he covers is the analysis of various digital performance elements, including sets, locales, and landscapes.
King Kong 1933 vs King Kong 2005
-
What do you think are the main changes between Kong 1933 and Kong 2005?
-
What differences do you notice between the physical effects and the digital ones?
KING KONG 1933
KING KONG (1933) was a landmark special effects film. Effects such as rear projection, matte painting, stop-motion animation, and miniatures were used, and these techniques still appear in films today. The team used models for the stop motion: Kong himself was made up of two identical 18-inch models and four independent models. The largest, a 24-inch model, was created for the New York scenes, and a smaller model was used for the scene where Kong falls from the Empire State Building. To make Kong appear realistic, the team also built two full-size reconstructions of his right hand and forearm covered in bear skin, and produced an eyeball a foot in diameter. Kong's movements were captured frame by frame.
KING KONG 2005
KING KONG (2005) is fully CG, shot with green screens. Kong's movements came from Andy Serkis in a motion-capture suit. To make the digital effects look real, the team studied the gorillas at London Zoo and even went to Rwanda to observe them in the wild. For the facial performance capture, Serkis had 134 small markers painted on his face to help capture every expression. After 70 years, Kong got a complete facelift, a digital one.
3.The Cinema of Attraction[s]: Early Film, Its Spectator and the Avant-Garde
Tom Gunning
Tom Gunning wrote "The Cinema of Attraction[s]" in response to what he saw as an over-emphasis on narrative film in film criticism. A narrative film tells a story, and most films fall into that category; it is the dominant mode of filmmaking. Films that do not tell a story are generally labelled avant-garde. Gunning argues that because of this dominance, when film historians look at early films they often see a progression towards narrative cinema, treating narrative as the inevitable end point. He says that if one observes the early years of cinema, one can see another important impulse: the impulse towards spectacle, or exhibitionism. He wants us to remember that cinema is a visual medium and attracts people visually. He explains how early film did this with the example of Méliès' A Trip to the Moon, which showed images of a space tale that attracted audiences because no one had seen the Moon up close, and the idea was exciting. The Lumière brothers' film of workers leaving a factory might have seemed equally interesting because it attracted an audience through its reality. These, he says, are the two main types of cinema. Produced in the first years of the medium, such films tried to attract people by giving visual pleasure and amazement even when they had no story to tell. The two impulses often go together, and the best example is A Trip to the Moon, which has both a storyline and visual effects. This is how the cinema of attractions works from Gunning's viewpoint.
Ray Harryhausen
“This is not just special effects. This is art”
Ray Harryhausen, the pioneering visual effects artist perhaps best known for his work on 1963's Jason and the Argonauts, died in London at the age of 92. He was an American-British animator and special effects creator who developed a form of stop-motion model animation known as "Dynamation". His innovative and inspiring films, from the 1950s onwards, changed the face of modern moviemaking forever. The largest and widest-ranging exhibition of Harryhausen's work ever staged contained newly restored and previously unseen material from his incredible archive.
Ray Harryhausen worked nearly single-handedly to make his most fantastic creations. He learned photography so that he could shoot his stop-motion films. He would draw his scenes and then build models of metal and rubber in which every part of the character could be moved, working at first in his garage. Later he made characters for many movies. It was in The Beast from 20,000 Fathoms that Harryhausen first used the technique he called "Dynamation", which split the background and foreground of pre-shot live-action footage into two separate images, between which he would animate a model or models, seemingly integrating the live action with the animation.
Harryhausen developed a special method for combining his stop motion with live action. He used simple setups: a single-frame projector behind the animation table, a translucent rear-projection screen in the middle, and a sheet of glass in front of the camera for painting out foreground areas of the rear-projected image. The crab scene from the 1961 film Mysterious Island was different from his regular work: instead of making a model from rubber, he fitted a mechanism inside a real crab's shell, so that every movement of the crab was accurate, which is why it looks so realistic.
Stop-Motion Animation Cartoons
Shaun the sheep
Shaun the Sheep is a British stop-motion television series and a spin-off of the Wallace and Gromit franchise. The title character, Shaun, previously appeared in the 1995 short film A Close Shave and in the Shopper 13 short from the 2002 series Wallace and Gromit's Cracking Contraptions. The series follows his adventures on a northern English farm as the leader of his flock.
Bob the builder
Bob the Builder is a British animated children's television series created by Keith Chapman for HIT Entertainment and Hot Animation. The series follows the adventures of Bob, a building contractor specialising in masonry, along with his colleague Wendy, various neighbours and friends, and their gang of anthropomorphised work vehicles: Scoop, Muck, Dizzy, Roley, Lofty, and many others. The show is broadcast in many countries but originated in the United Kingdom, where Bob was voiced by English actor Neil Morrissey. The series originally used stop motion from 1999 to 2009, but later switched to CGI animation starting with the spin-off series Ready, Steady, Build!. The British proprietors of Bob the Builder and Thomas & Friends sold the enterprise in 2011 to the US toy-maker Mattel for $680 million.
Noddy
Noddy is an English character created by the children's author Enid Blyton. Noddy first appeared in a book series published between 1949 and 1963, illustrated by the Dutch artist Harmsen van der Beek from 1949 until his death in 1953, after which the work was continued by Peter Wienk. Television shows based on the character have run on British television since 1967.
4. "What is Digital Cinema?"
- Lev Manovich
In this article, Lev Manovich describes the enormous change in Hollywood movies' use of computer graphics and visual effects in recent years. At first only Hollywood studios used digital technology, since the hardware and software were highly expensive. The shift to digital media, however, has affected the entire film industry, and digital technologies are replacing traditional filmmaking. He defines digital film this way: "digital film = live action material + painting + image processing + compositing + 2-D computer animation + 3-D computer animation".
Manovich emphasizes the differences between traditional and digital cinema through digital filmmaking methods, contending that there is no longer any need to film actual scenes, because everything can be produced using computer graphics and animation, while the resulting films still benefit from live-captured imagery. A great example of how capable digital cinema has become is the feather sequence from Forrest Gump, which shows how digitized footage can be manipulated without losing its qualities while giving a perfect impression of the object: a feather was photographed in various positions against a blue background to capture its real movements. He further clarifies that special effects are distinct from editing, because editing is merely the act of combining sequences of images; altering the image itself used to be the responsibility of special effects specialists. Today, computers have erased that distinction: with paint software, editing or changing a single image has become as easy as arranging a collection of photographs once was.
Teenage Mutant Ninja Turtles Movie 1990
In 1990, the movie was shot with costumes and practical effects done by hand on set.
Teenage Mutant Ninja Turtles Movie 2016
In 2016, they shot movies using CGI and green screens, motion capture suits, and markers on faces to capture facial expressions.
5. GEORGE LUCAS DISCOVERS COMPUTER GRAPHICS
By Alvy Ray Smith
Alvy Ray Smith recounts his experience as director of computer graphics at Lucasfilm in this article. In 1981, Ed Catmull was director of the computer division, and their shared objective was to produce a completely computer-animated movie. Smith says that Lucas hired them, gave them permission to use computer graphics in his movies, and otherwise supported them in fulfilling their dreams. They were asked to develop three pieces of technology in particular: a digital film printer, a digital video editor, and a digital audio synthesizer. Because they made investments based on gut feeling rather than a full understanding of the field's possibilities, Smith refers to people like George Lucas and Steve Jobs as "accidental visionaries". Smith goes on to discuss the creation of a 3D sequence for Star Trek II: The Wrath of Khan. To help the director and ILM's designers decide what could and could not be done with computer graphics, he proposed taking a night to think things through and develop a clear vision, and he got the chance to create the first 60 seconds of the sequence. His proposal was accepted, and he was immediately given his first post as a director. At the end of the essay, Smith adds that they succeeded in 1995, 20 years after setting out to make an entirely computer-generated movie.
Pixar Founders
Alvy Ray Smith
Co-founder of Lucasfilm's Computer Division and Pixar
Alvy Ray Smith III (born September 8, 1943) is an American computer scientist who co-founded Lucasfilm's Computer Division and Pixar, participating in the 1980s and 1990s expansion of computer animation into feature film. At Xerox PARC in 1974, Smith worked with Richard Shoup on SuperPaint, one of the first computer raster graphics editor, or "paint", programs. Smith's major contribution to this software was the creation of the HSV color space, also known as HSB, and he created his first computer animations on the SuperPaint system. In 1975, Smith joined the new Computer Graphics Laboratory at New York Institute of Technology (NYIT), where he was given the job title "Information Quanta". There, working alongside a traditional cel animation studio, he met Ed Catmull and several core personnel of Pixar.
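Smith's HSV model re-expresses an RGB colour as hue, saturation, and value, which is far more intuitive for an artist picking colours than raw RGB channels. Below is a minimal sketch of the hexcone conversion; Python's standard `colorsys` module implements the same mapping.

```python
def rgb_to_hsv(r, g, b):
    """Convert RGB (each 0..1) to Smith's HSV colour space."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx                                    # value = brightest channel
    s = 0.0 if mx == 0 else (mx - mn) / mx    # saturation = chroma / value
    if mx == mn:
        h = 0.0                               # achromatic: hue undefined, use 0
    elif mx == r:
        h = ((g - b) / (mx - mn)) % 6         # between yellow and magenta
    elif mx == g:
        h = (b - r) / (mx - mn) + 2           # between cyan and yellow
    else:
        h = (r - g) / (mx - mn) + 4           # between magenta and cyan
    return h / 6 % 1.0, s, v                  # hue normalised to 0..1

print(rgb_to_hsv(1.0, 0.0, 0.0))  # pure red: hue 0.0, full saturation and value
```

The result for pure red matches `colorsys.rgb_to_hsv(1.0, 0.0, 0.0)`, i.e. `(0.0, 1.0, 1.0)`.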
Loren Carpenter
Loren Carpenter was a co-founder and chief scientist of Pixar Animation Studios. He is the co-inventor of the Reyes rendering algorithm and one of the authors of the PhotoRealistic RenderMan software, which implements Reyes and renders all of Pixar's movies. Before Pixar, some of his work concerned using computer technology to improve Boeing's mechanical design processes, which were still done entirely by hand on paper. Following Disney's acquisition of Pixar, Carpenter became a Senior Research Scientist at Disney Research. He retired in early 2014.
John Lasseter
Head of animation at Skydance Animation
John Alan Lasseter (born January 12, 1957) is an American film director, producer, screenwriter, animator, voice actor, and the head of animation at Skydance Animation. He was previously the chief creative officer of Pixar Animation Studios, Walt Disney Animation Studios, and Disneytoon Studios, as well as the Principal Creative Advisor for Walt Disney Imagineering. Lasseter began his career as an animator with The Walt Disney Company. After being fired from Disney for promoting computer animation, he joined Lucasfilm, where he worked on the then-groundbreaking use of CGI animation. The Graphics Group of Lucasfilm's Computer Division was sold to Steve Jobs and became Pixar in 1986. Lasseter oversaw all of Pixar's films and associated projects as executive producer. In addition, he directed Toy Story (1995), A Bug's Life (1998), Toy Story 2 (1999), Cars (2006), and Cars 2 (2011). From 2006 to 2018, Lasseter also oversaw all of Walt Disney Animation Studios' films and associated projects as executive producer.
Edwin Catmull
Computer scientist & co-founder of Pixar
Edwin Catmull was born on March 31, 1945, in Parkersburg, West Virginia. Early in his life, Catmull found inspiration in Disney movies such as Peter Pan and Pinocchio and wanted to be an animator; however, after finishing high school he had no idea how to get there, as there were no animation schools at the time. Because he also liked math and physics, he chose a scientific career instead, though he also made animations using flipbooks. Catmull graduated in 1969 with a B.S. in physics and computer science from the University of Utah. Initially interested in designing programming languages, Catmull encountered Ivan Sutherland, who had designed the computer drawing program Sketchpad, and shifted his interest to digital imaging. As a student of Sutherland, he was part of the university's ARPA program, sharing classes with James H. Clark, John Warnock, and Alan Kay.
From that point, his main goal and ambition were to make digitally realistic films. During his time at the university, he made two new fundamental computer-graphics discoveries: texture mapping and bicubic patches; and invented algorithms for spatial anti-aliasing and refining subdivision surfaces. Catmull says the idea for subdivision surfaces came from mathematical structures in his mind when he applied B-splines to non-four sided objects. He also independently discovered Z-buffering, which had been described eight months before by Wolfgang Straßer in his PhD thesis. In 1972, Catmull made his earliest contribution to the film industry: a one-minute animated version of his left hand, titled A Computer Animated Hand, created with Fred Parke at the University of Utah. This short sequence was eventually picked up by a Hollywood producer and incorporated in the 1976 film Futureworld, which was the first film to use 3D computer graphics and a science-fiction sequel to the 1973 film Westworld, itself being the first to use a pixelated image generated by a computer. A Computer Animated Hand was selected for preservation in the National Film Registry of the Library of Congress in December 2011.
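The Z-buffer idea that Catmull discovered independently resolves visibility per pixel: the renderer keeps, for every pixel, the depth of the nearest surface drawn so far, and accepts a new fragment only if it is closer. A minimal sketch of that test (the buffer sizes and fragment values here are purely illustrative):

```python
import math

WIDTH, HEIGHT = 4, 4

# Depth buffer initialised to "infinitely far"; colour buffer to background.
zbuf = [[math.inf] * WIDTH for _ in range(HEIGHT)]
color = [["bg"] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, c):
    """Draw fragment c at (x, y) with depth z only if it is nearer than
    whatever is already stored there -- the core Z-buffer test."""
    if z < zbuf[y][x]:
        zbuf[y][x] = z
        color[y][x] = c

# A far fragment, then a nearer one at the same pixel, then a farther one.
plot(1, 1, 10.0, "far")
plot(1, 1, 5.0, "near")   # overwrites: 5.0 < 10.0
plot(1, 1, 7.0, "mid")    # discarded: 7.0 >= 5.0
print(color[1][1])  # near
```

Because each fragment is tested independently, surfaces can be drawn in any order and the nearest one still wins, which is what makes the technique so convenient for hidden-surface removal.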
6. The Uncanny Valley
By Masahiro Mori
More than 40 years ago, Masahiro Mori, then a robotics professor at the Tokyo Institute of Technology, wrote an essay about how he imagined people might react to robots that looked and behaved almost exactly like humans. In particular, he postulated that as a robot's appearance approached, but fell short of, being lifelike, a person's reaction to it would quickly change from empathy to repulsion. This slide into unease is called the uncanny valley. More recently, the idea of the uncanny valley has gained popularity in both popular culture and technical fields such as robotics: some scholars have examined its implications for computer-graphics animation and human-robot interaction, while others have looked into its biological and social underpinnings.
Examples can be found in robotics, 3D computer animations and lifelike dolls. With the increasing prevalence of virtual reality, augmented reality, and photorealistic computer animation, the "valley" has been cited in reaction to the verisimilitude of the creation as it approaches indistinguishability from reality. The uncanny valley hypothesis predicts that an entity appearing almost human will risk eliciting cold, eerie feelings in viewers.
7. Digital Storytelling: The Narrative Power of Visual Effects in Film (2008) - Shilo McClean
Chapter 7 - It Goes Like This: The Relationship Between Digital Visual Effects and Genre
From an academic perspective, genre theory is one of the most developed areas of film studies, and for this discussion VFX is a useful lens through which to examine genre. Genre refers to "types of films" and pertains to characteristics that are often specified by the industry. Neale argues that genre is a crucial component of any movie's narrative image. When examining visual effects and story craft, it is important to challenge the widely accepted belief that visual effects are mostly used in science fiction, action-adventure, fantasy, and horror movies. Science fiction filmmakers' use of VFX and academic study of the genre's traditions have both been active. For example, series editor Annette Kuhn says in her introduction to the first of the Alien Zone essay collections that "science fiction cinema distinguishes itself by its appeal to special effects technology in creating the appearance of worlds that either do not exist or cannot for one reason or another be recorded, as it were, live." She points out that, in keeping with classical storytelling traditions, science fiction uses effects to ground the narrative reality, but she also claims that special effects in science fiction movies always draw attention to themselves.
Disney movies examples
Animated Movies vs Live Action Movies
TARZAN
1999 (Animated)
Making Of Tarzan 1999 (Animated)
They created mood boards using solid drawings before beginning the animation process, and every principle of animation was observed.
TARZAN
The Legend Of
2016 (Live Action)
Breakdown of The Legend Of Tarzan 2016 (Live Action)
ALADDIN
1992 (Animated)
Making of Aladdin 1992 (Animated)
2019 (Live Action)
ALADDIN
Making of Aladdin 2019 (Live Action)
The Principles of Animation
1. Squash And Stretch
The principle is based on the observation that only stiff objects remain inert during motion, while objects that are not stiff, although retaining overall volume, tend to change shape to an extent that depends on the inertia and elasticity of the different parts of the moving object. To illustrate the principle, a half-filled flour sack dropped on the floor, or stretched out by its corners, was shown to be retaining its overall volume as determined by the object's Poisson ratio.
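The volume-preservation rule can be sketched numerically: stretch a shape along one axis and divide the other axis by the same factor, and the area (the 2-D stand-in for volume) stays constant. The function name and dimensions here are illustrative:

```python
def squash_stretch(width, height, s):
    """Stretch by factor s vertically while scaling width down to keep
    the 2-D area (a stand-in for volume) constant."""
    return width / s, height * s

# A 2x1 "flour sack" stretched to 1.5x its height keeps its area of 2.0.
w, h = squash_stretch(2.0, 1.0, 1.5)
assert abs(w * h - 2.0 * 1.0) < 1e-9  # area preserved
```

With s below 1 the same function gives a squash: the shape gets shorter and correspondingly wider.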
2. Anticipation
The Disney Studio's animators soon noticed that without a "planned sequence of events" leading the eye, audiences could not easily follow the animation. As a result, the animators would include a design known as an anticipation drawing to help the viewers get ready for an action by letting them guess what will happen next. Additionally, this would make the action seem more realistic.
3. Staging
Staging is the process of drawing the viewer’s attention to a particular scene. In order to draw the viewer’s attention, animators use a variety of poses and actions of the characters, as well as their positioning in the frame, the background, and other aspects of the scene.
In this method, animators can convey to the audience the atmosphere, reactions, and feelings of characters in a particular story by using animation techniques. In addition, staging can assist in informing an audience about the narrative.
4. Straight Ahead And Pose To Pose
In straight-ahead animation, the animator draws each picture in turn, one after another, to produce the movement. This can be a novel and imaginative way to work.
Pose to pose is a term used in animation, for creating key poses for characters and then inbetweening them in intermediate frames to make the character appear to move from one pose to the next. Pose-to-pose is used in traditional animation as well as computer-based 3D animation. The opposite concept is straight ahead animation, where the poses of a scene are not planned, which results in more loose and free animation, though with less control over the animation's timing.
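In computer-based animation, inbetweening at its simplest is linear interpolation between key poses. A minimal sketch, with hypothetical joint names:

```python
def inbetween(pose_a, pose_b, t):
    """Linearly interpolate between two key poses (dicts of joint angles).
    t = 0 gives pose_a, t = 1 gives pose_b."""
    return {joint: (1 - t) * pose_a[joint] + t * pose_b[joint]
            for joint in pose_a}

key_a = {"elbow": 0.0, "shoulder": 10.0}
key_b = {"elbow": 90.0, "shoulder": 40.0}

# Generate the three inbetween frames between the two keys.
frames = [inbetween(key_a, key_b, t) for t in (0.25, 0.5, 0.75)]
print(frames[1])  # {'elbow': 45.0, 'shoulder': 25.0}
```

Real 3D software interpolates with spline curves rather than straight lines, but the principle of filling intermediate frames between planned keys is the same.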
5. Follow Through and Overlap
Not everything associated with a character stops when the character does. A few things continue to move for a short period after the character stops, and according to the principle of overlapping action, different body parts move at different times.
6. Slow In and Slow Out
Rather than moving at a constant speed from point A to point B, more in-between frames are placed near the beginning and end of a movement. This gives the animation's motions a more realistic and natural feel.
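In computer animation, the same spacing is produced with an easing curve: the classic smoothstep polynomial clusters positions near the start and end of a move and spreads them out in the middle. A small illustrative sketch:

```python
def ease_in_out(t):
    """Smoothstep easing: slow near t=0 and t=1, fastest in the middle."""
    return t * t * (3 - 2 * t)

# Spacing of 11 frames moving from A (0.0) to B (1.0):
positions = [round(ease_in_out(i / 10), 3) for i in range(11)]
# The gaps between successive positions are small at both ends and
# largest around the middle -- exactly the slow-in/slow-out pattern.
```

Animation packages expose the same idea as editable ease-in/ease-out curves on each keyframe.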
7. Arcs
Most natural action tends to follow an arched trajectory, and animation should adhere to this principle by following implied "arcs" for greater realism. This technique can be applied to a moving limb by rotating a joint, or a thrown object moving along a parabolic trajectory. The exception is mechanical movement, which typically moves in straight lines. As an object's speed or momentum increases, arcs tend to flatten out in moving ahead and broaden in turns. In baseball, a fastball would tend to move in a straighter line than other pitches; while a figure skater moving at top speed would be unable to turn as sharply as a slower skater, and would need to cover more ground to complete the turn.
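The parabolic trajectory of a thrown object can be sampled frame by frame: horizontal motion stays linear while gravity bends the vertical motion into an arc. The launch values and frame rate here are illustrative:

```python
GRAVITY = 9.8   # m/s^2
FPS = 24        # frames per second

def arc(vx, vy, frames):
    """Positions of a projectile launched from the origin with velocity
    (vx, vy): x advances in a straight line, y follows a parabola."""
    pts = []
    for f in range(frames):
        t = f / FPS
        pts.append((vx * t, vy * t - 0.5 * GRAVITY * t * t))
    return pts

# One second of animation: the object rises, peaks, and falls back.
path = arc(vx=2.0, vy=4.0, frames=24)
```

Plotting these points per frame gives exactly the kind of implied arc an animator would draw by hand before posing the object.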
8. Secondary Action
A scene comes to life and can support the main action through secondary actions. When walking, a person can simultaneously speak or whistle, swing their arms or keep them in their pockets, and display emotions through facial expressions. The crucial aspect of secondary actions is that they draw attention to the primary action rather than detracting from it.
9. Timing
Timing describes how many drawings or frames there are for a specific movement, which relates to how quickly the motion moves on screen. Correct timing causes objects to appear to obey the laws of physics on a purely physical level. For example, an object's weight affects how it responds to a push, as a lightweight object would respond more quickly than a heavy one. Establishing a character's mood, emotion, and response depends on timing. Additionally, it can be used as a tool to convey certain facets of a character's personality.
10. Exaggeration
Exaggeration presents actions, poses, and expressions in a heightened form so they read more clearly and feel more alive. A perfectly literal imitation of reality can look static and dull on screen, so the essence of an action is pushed further while still remaining faithful to reality.
11. Solid Drawing
The idea of solid drawing is to give forms weight and volume so they can be considered in three dimensions. A competent artist, an animator must be familiar with the fundamentals of three-dimensional shapes, anatomy, weight, balance, light and shadow, etc. This required the traditional animator to take art classes and create life-like sketches. Johnston and Thomas specifically advised against the creation of "twins," or characters whose left and right sides were mirror images of one another and appeared lifeless.
12. Appeal
Cartoon characters' appeal is the equivalent of an actor's charisma. Appealing characters don't have to be sympathetic; they might be monsters or villains. What is crucial is that the spectator finds the character intriguing and authentic. There are many techniques for improving a character's connection with the audience; for likeable characters, a symmetrical or particularly baby-like face tends to work well. A convoluted or hard-to-read design, whether in the pose or the character itself, will lack appeal or "captivation".
8. What is a Genre?
One feels what it is like to recognize a genre when, while watching a movie, something shifts and one can sense what might happen next. The features that appear repeatedly in related films, books, television series, music, and other media are known as genres. Everyone is generally familiar with the broad literary categories of fiction, non-fiction, and poetry, and even these have conventions: in much poetry, for example, each line must rhyme or otherwise fit with the other lines. But genre gets interesting when it is found even in smaller categories like action movies.
For example, superhero movies generally involve an evil villain who is about to do something terrible before a superhero tries to stop him. There are usually smaller fight scenes throughout the movie and big fight scenes at the ending in which the superhero or group of superheroes attempt to win the battle. Since everyone is aware that superhero movies follow this format, this information contains no spoilers. This pattern is a genre, as are other repeating elements.
Composite Breakdown
300
PRESENTATION
Assignment 2
Since its beginnings in the entertainment industry, visual effects have greatly evolved. From the early use of special effects in silent movies to the most recent technological developments, VFX has developed into a powerful tool for improving storytelling. VFX can enhance an audio-visual production's narrative by adding realism and creative components. Bringing fantastical worlds to life is only one of the many ways visual effects improve storytelling: in science fiction movies, for instance, VFX can help create realistic yet strange environments that draw viewers into the narrative, allowing filmmakers to take the audience to locations that would otherwise be impossible to see and contributing to the surprise and excitement felt by viewers. Adding drama and excitement is another way VFX improves storytelling: in action movies, VFX helps generate high-stakes scenes that keep viewers on the edge of their seats. By using VFX to create believable and imaginative elements, filmmakers bring a sense of realism and credibility to their narratives, which brings the story to life and enables viewers to immerse themselves fully in the world being created. Colour grading, composition, digital environment design, and sound effects all play important roles in enhancing storytelling through VFX. When these elements are balanced, the effects succeed; when one of them is out of balance, the outcome suffers, which is why computer-generated effects are so frequently blamed for poor Hollywood productions. Each of these elements plays a crucial role in bringing a story to life and adding depth to the visual world of a film: colour grading sets the tone and mood, while good composition guides the viewer's eye and creates a clear visual hierarchy.
Digital environment design helps create believable and immersive worlds, while sound effects add an extra layer of realism and impact. Together, these elements create an immersive and engaging visual experience for the audience. Let's examine these factors in more detail.
Stephenne Alfaro (2021) wrote on her blog that colour grading is an important stage in producing blockbuster movies, short films, commercials, and television programmes: it creates the mood and atmosphere of a scene and helps narrate the story. Every film or video undergoes post-production, which includes both colour correction and colour grading, and colour correction comes first. The main objective of colour grading is to enhance and create emotion by adjusting the colours, but you need to colour correct first to achieve it. Colour correction balances each frame's white balance, neutralises the blacks and whites, and adjusts the footage's brightness, contrast, highlights, and shadows; the point is to prepare a smooth, blank canvas for the creative work. If colour correction is skipped, there is a good probability that the colours will appear unbalanced or oversaturated, and the loss of realism gives an amateurish appearance. A professional colourist always restores and repairs the raw images before grading in order to achieve brighter and more natural colours. Colour grading proper is the adjustment of colours to produce realistic tones and to set the atmosphere of the film, warmer or colder, in order to enhance the emotional storyline of the footage. Grading lets you create tones close to how the human eye perceives colours, making the finished version appear more finely crafted. Each scene in a video contains a variety of shots or frames, and most of the time each piece of raw footage has different lighting and saturation; even when raw footage looks acceptable on its own, the shots usually need to be matched to give the scene a consistent appearance.
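The order described above, correct first, then grade, can be sketched as two passes over RGB pixel values in the 0-1 range: correction neutralises a colour cast by scaling channels so a reference grey card reads neutral, and grading then warms or cools the balanced image. The function names and the grey-card reference are illustrative, not taken from any real grading tool:

```python
def clamp(v):
    """Keep a channel value inside the legal 0..1 range."""
    return max(0.0, min(1.0, v))

def white_balance(pixel, grey_ref):
    """Colour correction: scale each channel so grey_ref becomes neutral grey."""
    target = sum(grey_ref) / 3
    return tuple(clamp(c * target / ref) for c, ref in zip(pixel, grey_ref))

def warm_grade(pixel, amount=0.1):
    """Colour grading: push reds up and blues down for a warmer mood."""
    r, g, b = pixel
    return (clamp(r + amount), g, clamp(b - amount))

# A pixel shot under bluish light; the card that should read grey is bluish too.
raw = (0.40, 0.50, 0.70)
grey_card = (0.45, 0.50, 0.60)

balanced = white_balance(raw, grey_card)  # cast neutralised first
graded = warm_grade(balanced)             # then the creative grade
```

Skipping the first pass and grading the raw pixel directly would stack the warm grade on top of the blue cast, which is exactly the unbalanced, oversaturated look the text warns about.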
It appears more polished and smoother this way. However, bad colour grading can also irritate viewers and negatively affect how they judge the quality of a movie, especially its visual effects. Poor colour balance makes colours look strange and disturbs the viewer's suspension of disbelief. When the colour of the VFX components differs from that of the live-action footage, it becomes more obvious that the effects are artificial. Colour balance must therefore be considered carefully both during grading and when combining the VFX in order to achieve the best results.
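The correct-then-grade order described above can be sketched in code. The following Python fragment is an illustrative sketch using NumPy (not any colourist's actual tool): it applies a gray-world white balance and a levels stretch as the "correction" stage, then a simple warm-tone shift as a minimal stand-in for a creative grade.

```python
import numpy as np

def color_correct(frame: np.ndarray) -> np.ndarray:
    """Correction stage: gray-world white balance plus a levels stretch.

    `frame` is a float RGB image with values in [0, 1].
    """
    # Gray-world assumption: scale each channel so its mean matches the
    # overall mean, neutralising any colour cast.
    channel_means = frame.reshape(-1, 3).mean(axis=0)
    balanced = frame * (channel_means.mean() / channel_means)
    # Stretch levels so blacks sit at 0.0 and whites at 1.0.
    lo, hi = balanced.min(), balanced.max()
    corrected = (balanced - lo) / (hi - lo)
    return np.clip(corrected, 0.0, 1.0)

def grade_warm(frame: np.ndarray, amount: float = 0.08) -> np.ndarray:
    """Grading stage: push the corrected image toward a warmer tone."""
    warm = frame.copy()
    warm[..., 0] = np.clip(warm[..., 0] + amount, 0.0, 1.0)  # boost red
    warm[..., 2] = np.clip(warm[..., 2] - amount, 0.0, 1.0)  # cut blue
    return warm
```

Running `grade_warm(color_correct(frame))` mirrors the workflow in the text: the correction first produces a neutral, balanced canvas, and only then is the emotional tone imposed on it.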
Composition is the technique of placing objects within a frame. People do find certain alignments and shapes attractive, but the film's composition must also tell a story. From Kyle Cassidy's perspective (2019), your storytelling can be improved by the positioning of actors and objects within the frame: you give some objects more emphasis and others less. Filmmaking depends heavily on composition, and according to Cassidy it follows certain rules.

The fundamental rule is the rule of thirds. Two horizontal and two vertical lines divide the screen into thirds, like a tic-tac-toe grid, and your objects of interest should sit at the intersections of those lines.

Unlike in real life, you can freely crop off the top of someone's head in film and photography compositions. This does not apply to chins: chins need to stay in the frame unless you crop an equal portion of the top of the head. The eyes are what matter; we naturally scan other people's eyes and follow what they are looking at, and we hold the idea that the eyes give access to the person inside the body. This is common to everybody. If the eyes are in focus, many other elements of the shot can afford to be out of focus.

Then there is balance and symmetry. Imagine the frame as a shadow box that you fill with objects and that stands on a central fulcrum. A balanced frame normally creates a sense of harmony, whereas an unbalanced one creates tension. Despite the rule of thirds, totally symmetrical framing with the subject in the middle can also be very powerful; it is quite effective when handled correctly, yet some filmmakers become slaves to symmetry in their composition. Balance does not have to be symmetrical, though: items on one side of the screen can be balanced by items on the opposite side that are not mirror images of the first.
Leading lines are usually implied lines that connect one object to another in order to direct our attention to a particular element. Depth of field, the portion of the image that is in focus, lets the camera highlight certain elements and downplay others: a shallow depth of field keeps only a few objects in focus to balance the frame, while a deep depth of field keeps everything in focus. If these components are out of balance, the composition will not look convincing and may draw negative feedback from the audience.
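The rule of thirds described above is straightforward to express in code. The helper below is a hypothetical illustration (it does not come from Cassidy's article): it computes the four grid intersections for a frame of a given resolution and finds the one nearest a subject, which is where a framing assistant might suggest placing an object of interest.

```python
def rule_of_thirds_points(width: int, height: int) -> list[tuple[int, int]]:
    """Return the four intersections of the rule-of-thirds grid.

    Two vertical and two horizontal lines divide the frame into thirds;
    subjects placed near these intersections tend to read as more
    dynamic than dead-centre framing.
    """
    xs = (width // 3, 2 * width // 3)
    ys = (height // 3, 2 * height // 3)
    return [(x, y) for x in xs for y in ys]

def nearest_intersection(subject: tuple[int, int],
                         width: int, height: int) -> tuple[int, int]:
    """Find the grid intersection closest to a subject's position."""
    sx, sy = subject
    return min(rule_of_thirds_points(width, height),
               key=lambda p: (p[0] - sx) ** 2 + (p[1] - sy) ** 2)
```

For a 1920x1080 frame, the intersections fall at (640, 360), (640, 720), (1280, 360), and (1280, 720), so a face detected near the upper-left of the frame would be nudged toward (640, 360).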
The method of designing and building virtual worlds or environments using digital tools and technology is referred to as "digital environment creation". It plays a crucial role in a variety of media, including live-action films, animated films, and video games. To construct a digital environment, computer modelling and graphics programmes are used, along with tools for lighting, texturing, and animation, to create the scene's visual components. As Stephen Prince highlights in his book (2012, pp. 145–152), the advantages of constructing environments digitally include the ability to design settings that would be challenging or impossible to build in the real world, and the ability to control and modify the environment in ways that a real-world location would not allow. As a result, game designers and filmmakers can create incredibly realistic and immersive worlds that enthral and enchant audiences. Digital environments can be built in a variety of ways, including 3D modelling, computer-generated imagery, and real-time 3D engines; these methods and tools enable designers and artists to produce intricate and convincing settings for a range of media.

Much like digital environment construction, sound effects are an effective way to improve a film and shape the viewing experience. Together with music, sound effects help establish a sense of reality and set the mood, allowing viewers to become fully involved in the film and get the most from the experience. Sound effects help develop the film's world. According to one article (How Do Sound Effects Enhance a Film?, 2022), there will always be some degree of exaggeration in sound effects, and sound design draws on many different kinds of them. Four distinct categories of sound effects can be recognised in movies.
The first is hard sound effects, which include sounds like door alarms, gunshots, and passing cars. Then come background sound effects, which help the audience understand the environment; examples include car interiors and forest ambience. Next are Foley sound effects, noises created during post-production in sync with the picture to replicate commonplace sounds. Finally there are design sound effects, noises that do not typically occur in nature, such as the sounds of futuristic technology in science-fiction films. The overall quality and realism of a movie can suffer if the sound effects and digital environment are not done well. Viewers may feel cut off from the action on screen, which makes it hard for them to engage fully with the narrative, and important plot points or character motivations may not come across properly, leading to confusion. In some cases, poor sound design and digital environment creation can completely ruin the viewing experience, losing viewers' attention and harming the film's reception and box-office results.
In conclusion, Hollywood movies are frequently criticised for poor visual effects because those effects are so integral to a film's narrative and overall impact. If the visual effects are not believable, the viewer may become disengaged from the narrative and have a negative experience of the movie. Hollywood blockbusters also carry high expectations for visual effects, which makes VFX flaws more obvious, and the widespread use of CGI in Hollywood films has increased demand for seamless and convincing effects that can be challenging to deliver. In audio-visual productions, visual effects have improved greatly and now contribute significantly to better storytelling: they help create inventive and realistic settings, give people and creatures life, hold the audience's interest through dramatic and exciting scenes, and give the narrative greater depth. To give the spectator an immersive and captivating visual experience, a well-balanced VFX production must also pay attention to colour grading, composition, digital environment design, and sound effects. Colour grading improves the emotional tone and atmosphere of the story, while composition directs the viewer's eye and supports the storytelling. Digital environment design creates realistic worlds, and sound effects increase their impact. To get the best result, all of these factors must be considered carefully.
Reference list
Alfaro, S. (2021) 'Why Color Grading is an Important Step in Filmmaking', Beyond the Sight, 31 January. Available at: https://www.beyondthesight.com/color-grading-importance/ [Accessed 1 Feb. 2023].
Cassidy, K. (2019) 'What is composition and why is it essential in filmmaking?', Videomaker, 16 July. Available at: https://www.videomaker.com/article/c02/18610-the-basic-rules-of-composition/ [Accessed 1 Feb. 2023].
Prince, S. (2012) Digital Visual Effects in Cinema: The Seduction of Reality. New Brunswick, N.J.: Rutgers University Press.
HookSounds (2022) 'How Do Sound Effects Enhance a Film?', 24 August. Available at: https://www.hooksounds.com/blog/sound-effects-enhance-film/ [Accessed 1 Feb. 2023].