Introduction: Re-scoring a Trailer: Music Production and Sound Design
This project was developed for the University of Málaga as part of the Musical Acoustics course in the Sound & Image Engineering program. Its purpose is to produce an alternative version of the entire audio component of the 2017 presentation trailer for the video game "The Legend of Zelda: Breath of the Wild".
I first created a complete re-score of the trailer: an original musical composition intended as an alternative to the existing soundtrack, which I deliberately avoided listening to so it would not influence my own ideas.
After that, I produced and recorded the entire score using mainly orchestral sample libraries, building it instrument by instrument and track by track. Detailed automation work and careful mixing give the result a realistic timbral and expressive quality.
Finally, I carried out the sound design, recreating atmospheres and on-screen actions to provide the sense of naturalness and cohesion essential for a well-integrated audiovisual product.
Supplies
The tools used in this project were the following:
Software:
- Score editor: Sibelius Ultimate
- DAW: Logic Pro
- Sound libraries: Kontakt with Spitfire Symphony Orchestra
- Video editor: Final Cut Pro
Hardware:
- Audio interface: Behringer UMC1820
- Microphone: RØDE NT1-A
- Monitors: Yamaha HS5
- Headphones: ATH-M30x
Digital pianos / MIDI controllers:
- Digital piano and MIDI controller: Roland FP-90
- MIDI controller: KORG microKEY
Other instruments:
- Tin Whistle: Kerry Whistles Busker Tunable High D
- Djembe: Thomann NN32 Djembe V2
Step 1: Image Analysis
The first step was to import the trailer into the DAW, remove its audio, and watch it in detail. This allowed me to form my own interpretation of the narrative implied by the visuals.
Then, I marked all scene changes and other significant moments on the timeline. These appear as a list of SMPTE timecodes that would later help me determine sync points and guide the structural decisions of my score with precision.
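As a rough illustration of how these markers translate into absolute time, here is a minimal Python sketch; the frame rate and the timecodes are hypothetical placeholders, not the trailer's actual values:

```python
# Minimal sketch: converting an SMPTE timecode (HH:MM:SS:FF) to seconds.
# The 24 fps default is an assumption; use the video's actual frame rate.

def smpte_to_seconds(timecode: str, fps: float = 24.0) -> float:
    """Convert 'HH:MM:SS:FF' to absolute seconds at the given frame rate."""
    hours, minutes, seconds, frames = (int(part) for part in timecode.split(":"))
    return hours * 3600 + minutes * 60 + seconds + frames / fps

# Hypothetical sync points marked on the timeline:
for tc in ["00:00:12:08", "00:00:47:16", "00:01:23:00"]:
    print(f"{tc} -> {smpte_to_seconds(tc):.3f} s")
```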
Step 2: Musical Form
In projects like this, it is essential to establish the musical form before working on melody, harmony, or orchestration. The video imposes strong temporal and narrative constraints that naturally suggest a structure, and the goal is always to respect and enhance that structure musically.
With the list of markers in hand, I designed tempo and meter changes to align key musical moments with the trailer's visual sync points, such as section starts and endings, tension curves, and climaxes.
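The arithmetic behind this alignment is simple: if two sync points sit T seconds apart in the video and should span B beats of music, the tempo must be 60 × B / T BPM. A short sketch with made-up numbers:

```python
# Tempo needed so that a visual sync point lands exactly on a beat.
# The gap and beat count below are invented, not the trailer's markers.

def tempo_for_sync(gap_seconds: float, beats: int) -> float:
    """Tempo (BPM) that fits `beats` beats into `gap_seconds` of video."""
    return 60.0 * beats / gap_seconds

# Two hit points 13.5 s apart that should span 8 bars of 4/4 (32 beats):
print(f"{tempo_for_sync(13.5, 32):.2f} BPM")  # ~142.22 BPM
```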
At this stage the form does not need to be final; it simply provides a solid base for composition and can be adjusted later if needed.
I created this temporal framework directly in the DAW because its visual layout is intuitive, although it could also be done in the score editor, where the music itself would eventually be written.
Step 3: Reduction
Once the temporal structure was set, I began composing the music in the form of a reduction. It is not meant to be a playable piano reduction, but rather a clear sketch of the entire piece including melody, harmony, rhythm, and early textural ideas.
For the melodic material, I chose several of the most recognizable themes and leitmotifs from the Zelda saga and placed them in moments where their narrative meaning could strengthen the scene. The remaining passages were filled with non-thematic material: simple rhythmic patterns, free melodic gestures, and textures that maintain continuity without overshadowing more significant musical events.
From that sketch I built a harmonic-rhythmic framework supporting the overall tension arc. During this process I also began forming expressive intentions for each section (some more heroic, others more intimate or massive) and preliminary ideas about instrumentation before the orchestration.
Step 4: Orchestration
In this context, orchestrating means transforming the reduction into a fully written score for orchestra, assigning each musical idea to the instruments best suited to it, while shaping textures, timbres, and foreground/background layers. It is a task that requires both timbral knowledge and narrative clarity: each decision influences how the scene is perceived and how the music supports its dramatic weight.
Starting from the completed reduction, I distributed the lines across the orchestral families, adjusting ranges, articulations, and densities according to the expressive needs of each section and the practical performance characteristics of real instruments in a hypothetical recording session. Because of this, I formatted the score as a session score, the standard in film scoring.
This type of score prioritizes efficiency and legibility: large time signatures, bar numbers on every measure, no key signatures (to avoid confusion in modal or chromatic writing), transposing instruments written at concert pitch, and a layout designed for recording with a click track. These and other conventions are industry standards intended to make the recording session as smooth as possible, especially when working with musicians who may not have extensive rehearsal time.
Once everything was written and revised, the score was ready for recording, which in my case had to be done with orchestral sample libraries rather than real musicians (I wish!).
Step 5: Recording
With the score complete, I returned to Logic to record all the instruments using MIDI performances played through Spitfire Symphony Orchestra. My workflow consisted of reading each instrumental part directly from the score and performing it on my MIDI controllers, aiming to reproduce natural phrasing and expression.
This requires detailed parameter automation: the modulation wheel for dynamics, note velocity to control legato and articulation behavior, low notes that the library interprets as commands to switch playing techniques (keyswitches), and other manual adjustments such as vibrato. Careful refinement of these parameters is what allows sample libraries to produce convincingly realistic interpretations.
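To make that concrete, here is a small sketch of the kind of MIDI data this produces, written with the mido Python library; the keyswitch pitch and the CC curve are hypothetical, since every library maps its articulations differently:

```python
# Sketch: a keyswitch, a mod-wheel (CC1) crescendo, then the note itself.
import mido

mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

# Keyswitch: a very low note the sampler reads as "switch to legato".
# Note 24 is an assumed mapping; check the library's manual.
track.append(mido.Message('note_on', note=24, velocity=100, time=0))
track.append(mido.Message('note_off', note=24, velocity=0, time=10))

# Mod-wheel ramp: a gradual crescendo spread over one beat (480 ticks).
for value in range(0, 128, 16):
    track.append(mido.Message('control_change', control=1, value=value, time=60))

# The musical note, with velocity shaping attack/legato behavior.
track.append(mido.Message('note_on', note=67, velocity=80, time=0))
track.append(mido.Message('note_off', note=67, velocity=0, time=960))

mid.save('phrase_sketch.mid')
```

In practice I played all of this live on the controllers and refined it afterwards, but the data the DAW records is exactly this kind of event stream.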
Rendering this kind of project is demanding, since it takes considerable processing power and RAM to keep multiple orchestral instruments loaded. The most efficient strategy is to work instrument by instrument to avoid overloading the system, unless one has a particularly powerful computer.
There were three instruments I recorded as audio rather than MIDI: digital piano, tin whistle, and djembe. For these I applied basic processing (compression, EQ, reverb, and minor pitch/tempo corrections) so they would blend naturally with the orchestral samples.
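As an aside, the heart of the compression step can be reduced to a simple gain law. The sketch below shows only the static gain computer (a real compressor also smooths the level with attack and release times); the threshold and ratio are illustrative:

```python
# Toy downward compressor: static gain law only, no attack/release.
import numpy as np

def compress(signal: np.ndarray, threshold_db: float = -18.0, ratio: float = 4.0) -> np.ndarray:
    """Attenuate everything above the threshold by the given ratio."""
    eps = 1e-12                                  # avoid log(0) on silence
    level_db = 20 * np.log10(np.abs(signal) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)     # 4:1 keeps 1/4 of the excess
    return signal * 10 ** (gain_db / 20)

# A sine peaking at -6 dBFS comes out at threshold + excess/ratio = -15 dBFS.
tone = 0.5 * np.sin(np.linspace(0, 2 * np.pi * 440, 48000))
print(f"{20 * np.log10(np.max(np.abs(compress(tone)))):.1f} dBFS")
```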
Step 6: Mixing
The mixing stage focused largely on volume automation to maintain the intended sonic hierarchy established in the orchestration. This step is essential to preserve the musical intent: which elements should be foregrounded, which act as support, and where tension changes must be perceptible.
I also controlled the output level to maintain sufficient headroom, preventing distortion and ensuring a clean, balanced result.
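A quick way to verify that headroom is to measure the peak of a bounce offline. A minimal sketch with numpy and soundfile; the file name and the -3 dBFS target are my own assumptions:

```python
# Measure the sample peak of a mix bounce in dBFS.
import numpy as np
import soundfile as sf

data, samplerate = sf.read('mix_bounce.wav')    # hypothetical bounce file
peak = np.max(np.abs(data))
peak_dbfs = 20 * np.log10(peak) if peak > 0 else float('-inf')

print(f"Peak: {peak_dbfs:.2f} dBFS")
if peak_dbfs > -3.0:                            # a common headroom margin
    print("Mix is running hot; pull the output down before mastering.")
```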
Finally, I applied a simple, assisted mastering process using Logic’s built-in tool, adding subtle EQ and compression to enhance clarity, adjust dynamic range, and refine stereo consistency. Since this is an orchestral project, mastering is not as decisive as in other genres; its purpose here is mainly to add a final layer of polish.
Step 7: Sound Design
The last stage of the project was the sound design. This process involves recreating the acoustic environment of the video: ambience, actions, and any detail that enhances the sense of realism, so that the viewer perceives a cohesive audiovisual experience.
To achieve this, I selected royalty-free sound assets from various online sources (Pixabay, 101soundboards, YouTube, etc.), synchronized them with their corresponding visuals, and adjusted their levels so that they complemented the music without competing with it. In some cases I also used EQ to shape their spectral content to better match my intended aesthetic, or to simulate movement and distance.
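The distance trick, in particular, is mostly spectral: far-away sources lose high-frequency energy as well as level. A sketch of that idea with scipy and soundfile; the file names and the 2 kHz cutoff are hypothetical:

```python
# Make a sound effect read as "far away": darker and quieter.
import soundfile as sf
from scipy.signal import butter, lfilter

data, sr = sf.read('hoofbeats.wav')              # hypothetical source file
b, a = butter(2, 2000 / (sr / 2), btype='low')   # gentle 2 kHz low-pass
distant = lfilter(b, a, data, axis=0) * 0.4      # plus roughly -8 dB of gain

sf.write('hoofbeats_distant.wav', distant, sr)
```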
The goal was to make music and image feel inseparable, creating a more immersive final product and reinforcing a subconscious sense of cohesion.
Step 8: Final Result
To finish the project, I exported the master audio track and integrated it into Final Cut together with the original trailer, adding an opening title. With everything synchronized, I exported the final video and uploaded it to YouTube.
In conclusion, this project allowed me to work through the entire audio workflow of an audiovisual production: narrative analysis of the video, composition, production, sound design, and final assembly. It has been a valuable exercise to deepen my understanding of musical production and to demonstrate my ability to handle a project of this kind from start to finish.


