By Ben.
One of our focuses at SciTech Culture is technology. As such, Steve and I use various pieces of tech to produce the content for the podcast and the platform in general. To give you an insight into what we use, I thought I would provide an overview of the tech I currently use, both to produce content for SciTech Culture and for general day-to-day work.
To produce the SciTech Culture podcast each week, I use the following tech on my end:
• MacBook Pro (Retina 13-inch, Late 2013)
• iPhone 6
• Nokia Lumia 520
• iPad Mini 2
• Rode Podcaster Microphone
• Generic Mini Speaker
• Sony Bravia 55” Television
• Apple TV 4th Generation
• Various DVDs to prop up the above equipment (yes, you heard that right!)
• A plethora of audio, USB and power cables
On Steve’s end, he only needs to get himself in front of a device loaded with Skype. If it has a good microphone attached, so much the better. He’s got the easy part!
So many devices go into producing our podcast, and most of them are not visible in the image. The most you will see in each episode is part of the Rode Podcaster Microphone and the Mini Speaker. However, a lot more is going on behind the scenes to bring it all together.
The MacBook Pro is the workhorse of the operation, the central hub around which the podcast recording revolves. Using QuickTime's screen capture, the MacBook records Steve's Skype video feed at full screen as it comes through, forming the basis of his video. Simultaneously, the MacBook records the audio feed coming through the Rode Podcaster Microphone, matching it to the Skype screen capture. The microphone captures my audio directly, as well as Steve's audio coming through the Mini Speaker. I position the microphone close to both the speaker and my voice so that our audio feeds are recorded at a similar level, which I later tweak in post (crude, but effective!). In this way, I have high-quality, synchronised audio and video feeds to work from during editing.
The iPhone 6 records my video feed in 1080p HD. This feed needs to be synchronised with Steve's video later in post, as the audio the iPhone records is not usable. While the iPhone records my video, it simultaneously streams the feed wirelessly to the Apple TV, so I can see it on the Sony Bravia 55” Television sitting diagonally to my right. I glance at this TV to check that I am framed correctly and that nothing goes wrong with my feed, which is particularly important given I have no other direct way of seeing what is being recorded on my iPhone 6.
The iPad Mini 2 holds the show notes and the web links we refer to throughout the course of the discussion, and sits just to my right at a handy height to read and interact with. In roughly the same position is the Nokia Lumia 520, which is used as a stopwatch; although we place no arbitrary time limit on an episode, there are times when it is useful to check how long we have been recording. More often than not, it's used to gauge whether or not we have gone on for long enough; typically, we don't want a discussion that runs under ten minutes or over twenty-five. So if you see me looking off screen during an episode, it's usually to refer to one of these two devices.
Once the recording is complete, I import all the recorded elements into Final Cut Pro X on the MacBook Pro and complete the final edit and master. I use Compressor to render out the final MP3 for the audio version. Post-production on each episode is relatively straightforward, as the aim is to produce a final video that accurately reflects the free-form conversation Steve and I have. Edits are minimal, and only occur when something goes wrong during the recording (a dropped Skype feed is a common culprit).
I do a final check of the video by AirPlaying it from my MacBook Pro to the Sony TV via the Apple TV. Having the ability to do this still strikes me as incredible: for me it redefines the concept of a traditional editing workstation, using some very handy wireless technology to make use of whatever computer and screen are available.
Once the final master video and audio files are complete, I upload them to our various channels: YouTube, Vimeo, iTunes and our RSS feeds. Again, the MacBook Pro handles this with aplomb, managing simultaneous uploads without any issues. I then back up and archive all the master files, and remove the project and files from my MacBook Pro so it is ready to start the process all over again the next time Steve and I are ready to record. The posts for our social media feeds are typed into Notes on my MacBook, which synchronises with my iPhone, and I post them from the iPhone during the week.
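For the curious, that "back up the masters, then clear the decks" step could be sketched in a few lines of Python. To be clear, this is just an illustrative sketch, not the script I actually use (it's mostly drag and drop in Finder); the folder names and file extensions here are made up for the example.

```python
# Hypothetical sketch of the archive-then-clean step after an episode.
# Paths and extensions are illustrative; the real masters live wherever
# Final Cut Pro X and Compressor export them.
import shutil
from pathlib import Path

MASTER_SUFFIXES = {".mp3", ".mp4", ".m4v"}  # assumed master formats

def archive_and_clean(project_dir: Path, archive_dir: Path) -> list[str]:
    """Copy master files into the archive, then remove the project folder."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for master in sorted(project_dir.iterdir()):
        if master.suffix.lower() in MASTER_SUFFIXES:
            shutil.copy2(master, archive_dir / master.name)  # keeps timestamps
            copied.append(master.name)
    shutil.rmtree(project_dir)  # free up the MacBook for the next episode
    return copied
```

The point of copying before deleting is the obvious one: the MacBook's drive stays clean for the next recording, while the masters survive on the archive drive.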
When I think back to the good old days of the late 90s, when digital content production was just starting to become available, I find it amazing just how far we've come in two decades. Everything I described above would have been unthinkable back then, especially for mainstream use. All of this technology works so seamlessly today, and it is within anyone's grasp, not just those who have a special interest in doing this type of work. It used to be a struggle to get the technology to work; now we don't even think about it. Where will we be in another twenty years?