With this first post, I'm going to do my very best to carefully go through the steps I take to produce my newest song, "Troublemakers." The song has been written for quite a few months now, so this time we'll be skipping over the pre-production stuff like writing the music and lyrics, and instead hopping right into the recording. For all future reference, I use Logic Pro X on a MacBook Pro and never use any other DAWs (Digital Audio Workstations) to record, simply because I've found Logic to be the most user-friendly and straightforward.
This is what the project looks like as of now.
All of those green rectangles are MIDI information. MIDI is the standard (and the only one that I know of) for sequencing and digitally recording musical information. You plug in a controller like a keyboard or touchpad controller, and once you string the right stuff together in your DAW, your DAW becomes controllable through that controller via MIDI. That means you can play the software instruments through your controller, like a piano, electric guitar, trumpet, etc., or trigger sequences of sound that you may have sampled or that are already available from another plug-in like Kontakt or Omnisphere. That's just a small overview of its potential, though.
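Under the hood, MIDI is just a stream of small byte messages rather than audio. Here's a minimal sketch in Python of what a note-on/note-off message actually looks like per the MIDI 1.0 spec; the note numbers and velocities are illustrative, not taken from the song:

```python
# Each MIDI note-on message is three bytes:
# status (0x90 OR'd with the channel), note number (middle C = 60),
# and velocity (how hard the key was hit, 0-127).

def note_on(note, velocity, channel=0):
    """Build a raw MIDI note-on message as three bytes."""
    return bytes([0x90 | channel, note, velocity])

def note_off(note, channel=0):
    """Note-off uses status 0x80 | channel; velocity 0 here."""
    return bytes([0x80 | channel, note, 0])

# A C-major triad, as a keyboard controller would send it:
chord = [note_on(n, 100) for n in (60, 64, 67)]
```

This is why MIDI regions are so easy to edit after the fact: the DAW stores only these note events, and the software instrument renders them into sound on playback.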
Because I'm not a multi-instrumentalist, I often use the software instruments included with Logic, like the electric guitars and basses, to play those parts. The trick with any instrument, though, is being able to replicate the style an actual guitarist or bassist would play in real life, and not just playing 8th notes or 16th notes every time (unless you're going for that). The same goes for a woodwind or brass software instrument.
Most of the MIDI parts I have in here are for synths, though, because I don't have a keyboard with those sounds on it in my room. And by using the synth software instruments in Logic, plug-ins like the delay become configurable so you can keep them in sync with your project's tempo.
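The math a tempo-synced delay does is simple enough to sketch. This little helper (a hypothetical function, not a Logic API) converts a tempo and a note value into a delay time in milliseconds, which is the same conversion the plug-in makes when you pick a note value instead of a raw time:

```python
def delay_ms(bpm, note_fraction=0.25):
    """Delay time in milliseconds for a given note value at a tempo.
    note_fraction is relative to a whole note: 0.25 = quarter note,
    0.125 = eighth note, and so on."""
    # One quarter note lasts 60,000 ms / BPM, so a whole note is 4x that.
    whole_note_ms = 4 * 60_000 / bpm
    return whole_note_ms * note_fraction

# At 120 BPM, a quarter-note delay is 500 ms:
delay_ms(120, 0.25)  # -> 500.0
```

Syncing the delay to one of these values is what keeps the echoes landing on the grid instead of smearing across the beat.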
Because I've already gotten the arrangement done for the track, here's a quick overview of how it went. When I have a song that I've written, I figure out the music parts on the piano (the instrument I play). And so once that's done, I record the base piano part in Logic. I may or may not keep it, but I use it as an outline of the entire track to be able to play the sections with the other instruments later. After the piano is done, I go through each instrument individually and record them, whether it's the rhythm guitar, lead guitar, synths, or drums. So that part has already been done and each instrument has been laid out for each section.
For this particular song, I wrote this as a song to be played live with my band, so I also needed to record them playing their live instruments on the project. So I went ahead and had my friends come in to play the bass and guitar for the song.
So after the software instruments have been recorded and laid out, the live instruments come in to play over the outline that's already in place. In this case, the live instruments came in after the MIDI because I wrote the song and the band wasn't familiar with it yet, so they needed an outline. In other cases, we would record the live instruments first and then I would go back and layer the MIDI on top of them accordingly.
The two blue tracks are the electric bass. There are two tracks because one was plugged in directly to my recording interface, and the second was being pulled from my mic that I placed in front of the bass cabinet. The advantage of this is that you have two contrasting tones from your different sources. The line-in is thinner, but clearer, while the mic'd track is warm, thick, but slightly muddier. Combining these two elements gives you a superb sound from the desired instrument (typically used only for basses and guitars).
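Conceptually, combining the line-in and mic'd tracks is just a weighted sum of the two signals, which is what the channel faders are doing when you balance them against each other. A toy sketch, with illustrative gain values:

```python
def blend(line_in, mic, line_gain=0.5, mic_gain=0.5):
    """Mix two equal-length lists of audio samples with simple
    gain weights. A DAW's fader blend of two tracks is essentially
    this weighted sum, computed per sample."""
    return [l * line_gain + m * mic_gain for l, m in zip(line_in, mic)]

# Favor the clearer line-in slightly while keeping the mic's warmth:
# blend(line_samples, mic_samples, line_gain=0.6, mic_gain=0.4)
```

The real work happens in choosing the gains (and later the EQ) so the thin-but-clear line-in fills in what the warm-but-muddy mic track is missing.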
Recording the bass wasn't a pleasant single run-through of the entire song. When comparing MIDI to live recordings, there are obviously pros and cons to both. With MIDI, all the information is changeable, so if you mess up a certain part, you don't have to redo everything. Instead, you can just change that one sour note or move a whole chord up a step. However, MIDI doesn't offer the same sound you get from a live instrument, or the same technique a real musician brings to playing it. With live recording, you have to get the part right or re-take it. So from this image, you can see that we had to redo some parts. Luckily, we didn't have to re-record the entire song from start to finish. We were able to start at a chorus and go from there until the end, because Logic offers what is called "comping." With comping, you can choose the best parts of each take and combine them into one better-sounding composite take. A downside, though, is that your takes have to be on time so that everything syncs up when you pull from your different takes.
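At its core, comping is just picking one take per section and splicing the choices together. A toy sketch of the idea (the section labels are made up for illustration):

```python
def comp(takes, choices):
    """Assemble a composite take. `takes` is a list of takes, each
    split into the same sections; choices[i] says which take to use
    for section i. This only works if every take stays on the grid,
    so the spliced sections line up in time."""
    return [takes[t][i] for i, t in enumerate(choices)]

# Two takes, each split into [verse, chorus]:
takes = [["verse_take1", "chorus_take1"],
         ["verse_take2", "chorus_take2"]]
comp(takes, [0, 1])  # -> ["verse_take1", "chorus_take2"]
```

Logic's take folders do this non-destructively with crossfades at the splice points, but the timing requirement is the same: if a take drifts off the beat, its sections won't line up with the others.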
Next, we recorded the guitar. The guitar was recorded in two variants, the red and the yellow. Because the bass cabinet had a dedicated XLR out, we were able to record the bass's line-in and mic at the same time, on the same takes. The downside is that you only have one take, so there isn't any variation when you put the song together. It's the same reason vocal tracks are doubled on certain parts of a song, mostly during a chorus. However, because the bass is an underlying rhythm instrument along with the drums, it works best to just have one take of it. The guitar amp didn't have an XLR output, so we had to mic the amp first and then record all the parts again with the guitar running through the pedalboard, skipping the amp and going straight into my interface. The red is through the amp, while the yellow is direct. The smaller sections are just ad-lib parts the guitarist played to give more texture to the verses, and in some cases because he couldn't adjust the pedals between sections. Some parts were also doubled an octave up. So when you take a line-in and a mic'd take of the guitar and mold them together, you get a much nicer sound than one take alone.
These tracks aren't mixed or EQ'd yet, but the comparison gets across the variability that comes with two different takes as well as two different outputs.
For right now, the project stands as that: the MIDI information recorded, and the guitar and bass parts recorded as well. The next step will be to bounce all of the MIDI information so that it becomes audio tracks, which I find easier to mix. You can still EQ, compress, and do all that stuff with the MIDI (and some producers do), but I prefer to bounce it so the audio becomes set in stone and can be molded from there. The next post will cover that, as well as throwing on channel strip settings to get the sound I'm looking for! Thanks for checking this out, and I hope you'll be back for more!