Finding the Sound and Bouncing MIDI
Last time, I did a quick overview of the project so far and went over how I got the arrangement done with the MIDI instruments and live instruments. Now that the arrangement is done, the next step is to get "the sound" you're looking for by applying plug-ins to each track you want to manipulate.
The first step is applying EQ to your track.
The guitar got the makeover first. The sound I was looking for was bright rather than deep and dirty, and because I didn't particularly like the sound that came from recording the guitar through the pedalboard and the amp (I'll have to remember not to use the pedalboard next time), I had to do extra molding to achieve the sound I wanted. When applying EQ, you're essentially removing and adding frequencies. There could be a frequency somewhere in the mid-range that makes an undesirable sound, hum, or buzz, so you'd remove it by finding where it sits on the chart and dragging that point into the bottom half of the EQ display. Pulling the curve above the line boosts those frequencies instead. I pulled off some of the low end, mostly what can't be heard anyway because the bass takes over in those low frequencies, and I also boosted and cut some spots in the mid-range. The guitar falls mostly in the mids, so there wasn't much tweaking in the low and high ends.
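To make that concrete: each band of a parametric EQ is essentially a filter with a center frequency, a gain, and a width (Q). As a rough illustration (this is the standard "peaking" filter from the RBJ Audio EQ Cookbook, not Logic's actual implementation), here's how one boost or cut band can be computed in Python:

```python
import cmath
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad coefficients for one parametric EQ band
    (RBJ Audio EQ Cookbook 'peaking' filter)."""
    amp = 10 ** (gain_db / 40)           # positive dB boosts, negative cuts
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def gain_at(b, a, fs, freq):
    """The band's gain in dB at a given frequency."""
    z = cmath.exp(-1j * 2 * math.pi * freq / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# A 6 dB boost centered at 1 kHz: full boost at the center,
# tapering back toward 0 dB away from it
b, a = peaking_eq_coeffs(44100, 1000, 6.0, 1.0)
print(round(gain_at(b, a, 44100, 1000), 2))   # → 6.0
```

Dragging a point into the bottom half of Logic's EQ display is doing the same thing with a negative `gain_db`.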
And of course, to get the most honest sound, you want to listen only to the track you're molding. In this instance, I soloed the guitar so I could hear it exclusively while applying the EQ, because no two tracks are the same, and you can't hear the guitar clearly when the other instruments are blaring in the background.
After the EQ, I wanted to mold the guitar's sound further, so I added a Logic-exclusive plug-in called Amp Designer. Other DAWs may offer something similar, but as far as I know, none are as solid and configurable as Logic's.
You pick your amp head and cabinet, then tweak the knobs as you see fit, just as though it were an actual amp. You can add or pull bass, mids, or highs, adjust the presence and gain, or even switch on the delays and reverbs built into the amp heads. If you want, you can also adjust the position of the microphone on the cabinet and even change the mic itself, with options ranging from AKG 414s to Shure 57s. So I cycled through a couple of preset configurations, and once I found one I liked, I adjusted the knobs until I found the tone I was after. I applied the same EQ and Amp Designer treatment to the other line-in guitar take I had, but not with the same settings. I wanted each take to have contrasting tones that complemented each other, because even without the plug-ins their sounds were completely different, so one couldn't mimic the other if I tried.
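Amp Designer's internals aren't public, but conceptually an amp sim's gain stage boosts the signal and then soft-clips it; that nonlinearity is where the distorted character comes from. Here's a toy Python sketch, with `tanh` standing in for a real tube-saturation curve:

```python
import math

def amp_stage(samples, drive=4.0, level=0.5):
    """Toy amp gain stage: boost the input, soft-clip it with tanh
    (so loud peaks flatten out like tube saturation), then scale down."""
    return [level * math.tanh(drive * s) for s in samples]

# A loud peak gets squashed far more than a quiet sample does --
# that change in the waveform's shape IS the distortion
driven = amp_stage([0.1, 0.9], drive=8.0)
```

The real plug-in chains several stages like this, with tone-stack EQ between them and a cabinet/mic simulation after, which is why moving the virtual mic changes the sound.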
Once I finished with the amp modeling, I added reverb to the track through a bus. A bus is a channel where you can apply one or more plug-ins and route multiple tracks to it, conserving processing power and keeping things tidy. The guitar track went to a bus with some reverb from Logic's Space Designer. After applying that, I had to fix a part in the verse that I didn't think was on the money yet: a picking sequence that sounded a bit off, but luckily it was easily fixable!
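The routing itself is simple arithmetic: each send fader scales its track's signal, everything is summed on the bus, and the plug-in runs once on that sum instead of once per track. A minimal Python sketch (with a one-tap echo standing in for a real reverb like Space Designer):

```python
def mix_bus(sends, effect):
    """Sum every (track, send_level) pair into one bus signal,
    then run the shared effect once on the sum."""
    length = max(len(track) for track, _ in sends)
    summed = [0.0] * length
    for track, level in sends:
        for i, sample in enumerate(track):
            summed[i] += sample * level
    return effect(summed)

def echo(samples, delay=2, feedback=0.5):
    """Stand-in 'reverb': mix in a copy delayed by `delay` samples."""
    out = samples[:]
    for i in range(delay, len(out)):
        out[i] += feedback * samples[i - delay]
    return out

guitar = [1.0, 0.0, 0.0, 0.0]   # a single impulse, for illustration
synth = [0.0, 1.0, 0.0, 0.0]
wet = mix_bus([(guitar, 0.8), (synth, 0.4)], echo)
```

The send levels (0.8 and 0.4 here) play the role of the bus-send faders: they decide how much of each track reaches the shared effect.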
Logic has another feature called Flex Time that lets you manipulate the timing and rhythm of tracks, with different algorithms for polyphonic, monophonic, and rhythmic material (guitars, vocals, and drums, respectively). It offers a couple more, but for my purposes, those are the three I mostly deal with. Picking an algorithm may seem redundant since it's all timing, but it tells Logic what kind of instrument you're fixing so it can more accurately find the transients in the track, which are what I call the "strong beats" of the track. That may not be the textbook definition of a transient in recording, but it helps me understand what I'm moving around each time I do it!
The transients on this guitar track are each of the lines visible on the waveform. Although you can create a transient marker anywhere on the track by clicking, those lines are the "strong beats" Logic detected in the recording. For this job, I used the polyphonic algorithm, since the guitar is obviously a polyphonic instrument (though it can be monophonic if you're only playing one note at a time). I soloed the track and played it against a metronome to better hear the rhythmic differences between notes and sync them up. Although it wasn't terribly off-time, being able to correct the timing makes the track far more usable, so that effects like delay can work properly (and I ended up using delay on this guitar track). I used Flex Time to fix spots where the picking got rushed in the faster sequences.
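A crude way to picture what the detector is doing: slide a small window along the waveform and flag the spots where the short-term energy suddenly jumps. Logic's detection is far more sophisticated, but the idea is along these lines:

```python
def find_transients(samples, frame=4, ratio=2.0):
    """Very crude onset detector: mark a frame's start position when
    its energy exceeds `ratio` times the previous frame's energy."""
    energies = []
    for start in range(0, len(samples) - frame + 1, frame):
        window = samples[start:start + frame]
        energies.append((start, sum(s * s for s in window)))
    hits = []
    for (_, prev_e), (start, e) in zip(energies, energies[1:]):
        if e > ratio * prev_e + 1e-9:   # epsilon so silence doesn't trigger
            hits.append(start)
    return hits

# Near-silence, then a loud pluck starting at sample 8
signal = [0.01] * 8 + [0.9] * 8
print(find_transients(signal))   # → [8]
```

Flex Time then anchors its time-stretching around markers like these, so dragging one "strong beat" stretches only the audio between the neighboring markers.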
After the alterations to the guitar, the bass was up next. I applied EQ and also the Bass Amp Designer that Logic offers, the counterpart to Amp Designer for guitars. The one thing I did differently for the bass was adding parallel compression to give it more presence and thickness in the track. Because I also apply parallel compression to the drums, I put the compressor plug-in on a bus so I could send all of those tracks to it later. By sending tracks to the bus individually, you can adjust how much compression each one gets, because each bus send has its own fader that controls how much signal is sent.
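Parallel (sometimes called "New York") compression is, at bottom, just a blend: squash a copy of the signal hard and mix it back under the untouched one, so quiet details gain presence without flattening the peaks. A bare-bones Python sketch (a real compressor also has attack and release smoothing, omitted here):

```python
def compress(samples, threshold=0.3, ratio=4.0):
    """Static compressor: any level above the threshold is
    reduced by `ratio` (no attack/release envelope)."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def parallel_compress(samples, blend=0.5, **kwargs):
    """Dry signal plus a scaled-down, heavily compressed copy."""
    wet = compress(samples, **kwargs)
    return [dry + blend * w for dry, w in zip(samples, wet)]

bass = [0.1, 0.8, -0.6, 0.2]
thickened = parallel_compress(bass)
```

`blend` is the equivalent of the bus-send fader: each track routed to the compression bus can send a different amount.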
To be honest, I made a preset for parallel compression because there are a lot of knobs and sliders involved. If you want to achieve the same thing in your mix, you're more than welcome to build your own preset from what I have here! The result definitely makes a difference in the tone of any instrument you run through the bus.
After the guitar and bass were done, I bounced the MIDI into individual audio files. Nothing too complicated there, actually! I counted how many MIDI tracks I had, made the same number of audio tracks to bounce to, then went down the list and converted them all to audio. So all of those green rectangles you saw before are now permanent audio. However, I can always go back to the MIDI instruments at any time (I just hide them to keep things clean), make changes, and re-bounce if I need to, so it's really only semi-permanent.
And because the MIDI I recorded was quantized (snapped to the project's timing grid), I can slice any of these audio regions anywhere I need to and they'll stay in sync, so I don't have to worry about using Flex Time to fix mistakes!
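Quantizing itself is nothing more than snapping each note's start time to the nearest grid line. In MIDI tick terms (assuming the common resolution of 480 ticks per quarter note, so 120 ticks per 16th note):

```python
def quantize(note_starts, grid=120):
    """Snap MIDI note start ticks to the nearest grid line
    (120 ticks = one 16th note at 480 ticks per quarter)."""
    return [round(t / grid) * grid for t in note_starts]

played = [5, 118, 245, 362]    # slightly early or late 16ths
print(quantize(played))        # → [0, 120, 240, 360]
```

Because every note lands exactly on the grid, any slice made on a grid line stays in sync, which is exactly why the bounced audio can be cut up freely.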
That's a wrap for this portion. The next post will cover mixing the tracks, as well as bouncing the MIDI drum files and applying channel strip settings to those, much like I did for the guitar and bass. Cheers!
With this first post, I'm going to do my very best to carefully walk through the steps I take to produce my newest song, "Troublemakers." The song was written quite a few months ago, so this time we'll skip the pre-production stuff like writing music and lyrics and hop right into the recording. For future reference, I use Logic Pro X on a MacBook Pro and never use any other DAWs (Digital Audio Workstations) to record, simply because I've found Logic to be the most user-friendly and straightforward.
This is what the project looks like as of now.
All of those green rectangles are MIDI information. MIDI is the standard (and the only one I know of) for sequencing and digitally recording musical performance data. You plug in a controller like a keyboard or a touchpad controller, and once you string the right stuff together in your DAW, the DAW becomes controllable through your controller via MIDI. That means you can play the software instruments (piano, electric guitar, trumpet, and so on) from your controller, or trigger sequences of sound you've sampled or that come with another plug-in like Kontakt or Omnisphere. That's just a small overview of its potential, though.
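At the wire level, MIDI is tiny: a Note On event, for example, is just three bytes — a status byte carrying the message type and channel, then the note number, then the velocity (how hard the key was struck). In Python:

```python
def note_on(channel, note, velocity):
    """Build the three raw bytes of a MIDI Note On message:
    status (0x90 | channel), note number (0-127), velocity (0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

# Middle C (note 60) on channel 1 (index 0), struck at velocity 100
msg = note_on(0, 60, 100)
print(msg.hex())   # → '903c64'
```

When you press a key on a controller, bytes like these are what the DAW actually receives; the green rectangles in the project are recorded streams of them, plus timing.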
Because I'm not a multi-instrumentalist, I often use the software instruments included with Logic, like the electric guitars and basses, to play those parts. The trick with any software instrument, though, is replicating the style an actual guitarist or bassist would use in real life, not just playing 8th or 16th notes every time (unless you're going for that). The same goes for a woodwind or brass software instrument.
Most of the MIDI parts I have in here are for synths, though, because I don't have a keyboard with those sounds in my room. And by using the synth software instruments in Logic, all of the plug-in parameters become configurable, like the delay, to keep everything in sync with your project.
Because I've already gotten the arrangement done for the track, here's a quick overview of how it went. When I have a song I've written, I figure out the music parts on the piano (the instrument I play). Once that's done, I record the base piano part in Logic. I may or may not keep it, but I use it as an outline of the entire track so I can play the sections with the other instruments later. After the piano is done, I go through each instrument individually and record them, whether it's the rhythm guitar, lead guitar, synths, or drums. So that part has already been done, and each instrument has been laid out for each section.
I wrote this particular song to be played live with my band, so I also needed to record their live instruments on the project. So I had my friends come in to play the bass and guitar for the song.
After the software instruments have been recorded and laid out, the live instruments come in to play over the outline that's already in place. In this case, the live instruments came in after the MIDI because I wrote the song and the band wasn't familiar with it yet, so they needed an outline. In other cases, we'd record the live instruments first, and then I'd go back and layer the MIDI on top of them accordingly.
The two blue tracks are the electric bass. There are two because one was plugged directly into my recording interface, and the second was captured by the mic I placed in front of the bass cabinet. The advantage is that you get two contrasting tones from different sources: the line-in is thinner but clearer, while the mic'd track is warm and thick but slightly muddier. Combining the two gives you a superb sound from the instrument (a technique typically used only for basses and guitars).
Recording the bass wasn't one pleasant run-through of the entire song, though. When comparing MIDI to live recordings, there are obviously pros and cons to both. With MIDI, all the information is editable, so if you mess up a certain part, you don't have to redo everything; you can just change that one sour note or move a whole chord up a step. However, MIDI doesn't offer the same sound you get from a live instrument, or the technique a real musician brings to it. With live recording, you have to get the part right or retake it. So from this image, you can see that we had to redo some parts. Luckily, we didn't have to re-record the entire song from start to finish; we were able to start at a chorus and play through to the end, because Logic offers what's called "comping." With comping, you choose the best parts of each take and combine them into one better-sounding take. The catch is that your takes have to be on time so that everything syncs up when you pull from the different takes.
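Conceptually, comping is just slicing the synced takes into segments and, for each segment, keeping the take that came out best. A toy sketch (real comping uses arbitrary regions rather than fixed-length segments, and crossfades the joins):

```python
def comp(takes, choices, seg_len):
    """Assemble a comp: for each segment, copy that stretch of audio
    from the chosen take. Assumes all takes are in sync and equal length."""
    out = []
    for seg, take_index in enumerate(choices):
        start = seg * seg_len
        out.extend(takes[take_index][start:start + seg_len])
    return out

# Two synced takes of the same part; keep the first half of take 0
# and the second half of take 1
takes = [[1, 1, 1, 1], [2, 2, 2, 2]]
print(comp(takes, choices=[0, 1], seg_len=2))   # → [1, 1, 2, 2]
```

The "takes have to be on time" caveat is visible here: if one take drifted, `start` would point at the wrong musical moment in it.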
Next, we recorded the guitar, in two variants: the red and the yellow tracks. Because the bass cabinet had a dedicated XLR out, we were able to record the line-in and the mic at the same time, on the same takes. The downside is that you only have one performance, so there's no variation when you put the song together; it's the same reason vocal tracks are doubled on certain parts of a song, mostly the choruses. However, because the bass is an underlying rhythm instrument alongside the drums, it works best to have just one take of it. The guitar amp didn't have an XLR output, so we had to mic the amp first and then record all the parts again with the guitar going through the pedalboard, skipping the amp and running straight into my interface. The red track is through the amp, while the yellow is direct. The smaller sections are ad-lib parts the guitarist played to give more texture to the verses, split up partly because he couldn't adjust the pedals between some sections. Some parts were also doubled an octave up. So when you have both a line-in and a mic take of the guitar and mold them together, you get a much nicer sound than a single take gives you.
These aren't mixed or EQ'd yet, but the comparison shows the variability that comes with two different takes as well as two different outputs.
For right now, the project stands as that: the MIDI information recorded, and the guitar and bass parts recorded as well. The next step is to bounce all of the MIDI information into audio tracks, which I find easier to mix. You can still EQ, compress, and do all of that to the MIDI directly (and some producers do), but I prefer to bounce it so the audio becomes set in stone and can be molded from there. The next post will cover that, as well as throwing on channel strip settings to get the sound I'm looking for! Thanks for checking this out, and I hope you'll be back for more!
From Paper to Production
One of the greatest things that I get to do on a daily basis is make music. I mean, that's what musicians do! An actor acts on set, a doctor fixes up their patients, a musician creates music, and so on and so forth. But like any job, profession, or career, there is a lot that goes into the craft of whatever you're doing. For the last five years, I've had the pleasure of teaching myself the art of digital recording, having done songwriting for a few years beforehand. What pushed me to learn was my fascination with computer technology and all the cool software, plus the fact that I had music written down but didn't have the cha-ching or the know-how to get a professional-sounding recording at the end of the day. Like most people, I went for the cheaper alternative and tried to do it myself. The saying goes, "Give a man a fish, and he will eat for a day. Teach a man to fish, and he will eat for a lifetime." Needless to say, I taught myself how to "fish" instead of relying on others to do it for me.
Fast forward through the trials and errors, the software changes, and the equipment upgrades, and here I am today, writing and producing for myself and other clients. A lot of my learning, however, came from short tutorials by YouTubers showing their subscribers certain tricks for their projects. Those were very helpful, don't get me wrong, but I hardly ever saw a person actually go through the full process of making a song. There are many steps even before you get to recording the music, and the recording part is a story of its own. So with this blog, I'm going to do my very best to leave footprints in the sand for anyone reading this to follow for the projects that I show on here. Maybe not every trick will apply to each individual reader because of software differences, but most will be universally applicable. So without further ado, let's begin!