Hi Jimbo, thanks for the input. On smaller, shorter shoots visual sync works great. However, I'm currently working on an international reality show, and it runs like this:
Audio from 13 lapel sets, 2 room mics and a wireless feed from the ENG crew comes to my laptop and desk via a Dante card and some other gear (RME, etc.).
We record in Nuendo Live, which syncs from a Rosendahl MIF timecode generator.
Every morning we sync the Rosendahl to real-world time via an LTC app on my Android phone, which outputs an SMPTE signal through the headphone jack to an RCA connector.
The SMPTE clock starts (running real-world time), and all 7 HD cameras, 2 mobile audio bags and 4 iPads (via a Wi-Fi network) get synced to the audio system, then disconnected to run in Trigger & Freewheel mode. It's called jam sync.
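The jam-and-freewheel idea can be sketched in a few lines: the device reads external LTC once, latches the offset against its own internal clock, and then keeps time on its own crystal after the cable is pulled. This is a toy illustration only (the class and method names are hypothetical, and a real device decodes an LTC audio stream rather than receiving a `datetime`):

```python
from datetime import datetime, timedelta

class FreewheelClock:
    """Toy model of a jam-synced device clock (hypothetical, for illustration).

    jam() latches the offset between the external timecode and the device's
    internal clock; afterwards the clock free-runs on that offset alone,
    which is why it slowly drifts until the next morning's re-jam.
    """

    def __init__(self, fps: int = 25):
        self.fps = fps
        self.offset = timedelta(0)  # correction learned at jam time

    def jam(self, external_tc: datetime) -> None:
        # Latch the difference between the external source and our own clock.
        self.offset = external_tc - datetime.now()

    def now_tc(self) -> str:
        # Freewheel: internal clock plus the jammed offset, as HH:MM:SS:FF.
        t = datetime.now() + self.offset
        frames = int(t.microsecond / 1_000_000 * self.fps)
        return f"{t:%H:%M:%S}:{frames:02d}"
```

After `jam()`, `now_tc()` tracks the external source to within whatever the internal crystal drifts, which is exactly the freewheel behaviour that accumulates the odd frame over a long shooting day.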
After a full day's filming, the files are gathered and dumped into an editing suite, which reads the SMPTE timecode from all the files and lines up every take from every camera with the recorded audio from Nuendo. It's frame accurate, and we don't have to move anything except for the odd 1-frame drift over 12 hours of shooting.
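The conform step the edit suite performs is essentially timecode arithmetic: convert each file's start timecode to an absolute frame count and slide by the difference. A minimal sketch, assuming non-drop-frame 25 fps timecode (the function names are mine, not any NLE's API):

```python
FPS = 25  # assumed PAL, non-drop-frame; adjust for your project

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert 'HH:MM:SS:FF' non-drop timecode to an absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def align_offset(camera_start: str, audio_start: str, fps: int = FPS) -> int:
    """Frames to slide a camera take so it lines up with the audio record."""
    return tc_to_frames(camera_start, fps) - tc_to_frames(audio_start, fps)

# Example: camera started 3 seconds and 10 frames after the audio record.
print(align_offset("09:30:03:10", "09:30:00:00"))  # → 85
```

Because every device stamped the same real-world clock, that single offset is all the editor needs per file, which is why the daily conform takes minutes rather than hours.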
That's how the big productions do it, which is great, but I was wondering if Sonar is reliable for this job.
post edited by LJB - 2017/07/20 15:42:04