Sighting: Third Rail Repertory’s Tech Table

February 10th, 2011 by Christopher

Sam Kusnetz just sent in this gem from Third Rail Repertory:


Tech table


Editor’s note: Oops, we got confused about which tech table this is. Sam also works at PCS, but this one is from Third Rail Rep.

Rental Licenses Make Me Happy

January 13th, 2011 by Christopher

Because they let our customers do stuff like this:

Folks,

This week I suddenly found myself in need of a cueing package for a 4-day event. My budget really wouldn’t have permitted me to acquire a package of QLab’s sophistication, but your rental plan saved the day.

I will certainly shout the virtues of your product to anyone I can. The client said, “Outstanding work, flawless execution!” Thanks so much for making me appear to be a professional in an affordable manner!

Sean Stewart

(Edited to Add) More kind words about rentals from Seitu Coleman:

Just emailing you all at QLab to express my complete love of your program, as well as your innovative licensing structure. Being able to create a show off-line and then “rent” a license for a few days is a tremendous benefit to my workflow. I have yet to hear of any other pro software that lets you rent a license; did you all invent that? I am always telling other techs about QLab. Keep up the good work.

Thanks guys!

Sighting: the Amara Zee

December 19th, 2010 by Christopher

My friends, I’d like to introduce you to the Amara Zee:

amara-zee.png

She’s a ship.

She’s a theatre ship.

She’s a 30-meter. Custom-built. Theatre. Ship.

Here she is in the Croatian city of Vis:

az-viz.png

Here she is in Hvar:

az-hvar.png

Here she is in Trogir:

az-trogir.png

By the end of her tour this summer, she had visited 11 Croatian cities, 1 Slovenian, and 3 Italian.

Now you, being the smart cookie that you are, have already figured out that the audio and video on the Amara Zee are powered by QLab.

But wait…. what’s…. what’s that there?

az-deck.jpg

No, not the giant heads… that thing in the back there…. here, look at it from this side:

az-side.jpg

Is that…. could that possibly be….

az-flag.jpg

OOOHHH SNAP BITCHEZ

WE ON A BOAT

<sniff> Ah gee… that’s our little guy up there! Traveling the world! What a lucky little fella!

Sighting: Making a QLab Go Button Box

December 17th, 2010 by Christopher

Chris Mower writes to share this great homebrew trigger project:

Working the sound cues on the Mac in a darkened theatre was making me a bit uneasy as there were far too many opportunities to hit the wrong key or touch the mouse pad.

Having to take your eyes off the script to double-check that your finger was in the right place to trigger the cue meant scrambling to find your place again before the cue came up.

I decided that it would be nice to have a simple “GO” button like there is on the lighting desk. The result is given here for anyone else who has a spare USB keyboard of any kind and is moderately competent with a soldering iron.

Page_2.jpeg
Page_3.jpeg
Page_4.jpeg

Very cool!

Musical Interlude

December 6th, 2010 by Christopher

Toolbox: Flip camera, Final Cut Express, ScreenFlow, Soundflower, WireTap Studio, and QLab.

Music: Hey Pocky A-Way, The Meters. (Amazon, iTunes)

Sighting: Extreme Vegas

November 30th, 2010 by Christopher

Justino Zoppe just wrote in to share his experience using QLab for Extreme Vegas. I gotta say, this is pretty hott. Check it:

I just wanted to tell you how incredibly happy I am with your outstanding software! I am the producer of “Extreme Vegas” currently performing here on board the Norwegian Star. We opened our show last night using QLab for the first time.

We ran all 180 lighting cues, all music, all video, and even 2 separate tracks of production calls through additional audio channels: one was sent to the spotlight operators’ in-ear headphones, and a unique track went to the stage automation tech, using my pre-recorded voice as a production manager calling the show live. Everything was in perfect sync, and we used hot keys to control the live sections of the 60-minute show.

I have never been able to get lighting techs to nail audio cues for lighting effects. Now we have lighting cues and accents on musical effects that sync perfectly, all because of your outstanding software.

I use MSC to fire a group of 60 or so cues at a time, until the next section of the show. Each group fires all of its children at the same moment, and I program a pre-wait on each child to set the exact time that cue fires.

Anyway I just wanted to say well done…it was so worth the money and I’m very happy to have your software as part of our production. I am also adding you to the credits of the show that appear on the 40′ screen after the show.

Thanks
Justino Zoppe
www.ExtremeVegas.biz
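
For the curious, here’s the timing model Justino describes, sketched in Python. The cue names and pre-wait values are ours, purely for illustration:

    # Toy model of a QLab group cue that fires all of its children at GO:
    # each child starts its pre-wait simultaneously, then performs its
    # action when that wait expires. Names and times are hypothetical.
    children = [
        ("blackout",       0.0),   # pre-wait in seconds
        ("music swell",    2.5),
        ("strobe accent",  4.0),
        ("spot call",     12.0),
    ]

    def fire_group(children):
        """Print the moment each child actually fires after a single GO."""
        for name, pre_wait in sorted(children, key=lambda c: c[1]):
            print(f"t={pre_wait:5.1f}s  ->  {name}")

    fire_group(children)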

The Magic Bullet Part I: The Hardware

November 18th, 2010 by luckydave
luckydave — projection designer, QLab support ace, and our local video guru — talks us through his optimization process for video playback.

In this post I will discuss video design workflows and the strategies I’ve found that work well with QLab (and most other show control software).

I’ll start with two very important axioms:

  1. There is no Magic Bullet;
  2. There is no Easy Way.

Whenever we create our original artwork, or just throw together a quick edit of some stock footage, we’re always looking for a way to keep moving in tech rehearsal. We don’t want to slow down the director’s (and our own) creative flow. We want collaboration to move at the speed of ideas, and our stupid computers keep getting in the way of that. Yup.

In those moments, I like to remind myself that I’m not working with film; that when I think of a “layer,” I don’t mean literally stacking negatives on top of each other and hoping it comes out right in the water wash. That is to say, sure, encoding time and the classic “Give me a half hour while that clip renders” are frustrating. But sometimes it helps to remember that we’re working in a field that’s still only taken its toddler first steps. Even with the frustration of waiting for a render, it’s still better than saying, “I’ll work that out tonight, and we can look at it in the morning,” and losing a night’s sleep to darkroom chemical fumes.

It’s in those moments, when all I have to hold onto is how much less frustrating this is now than it was a few short years ago, that I wish there were an Easy Way. One trick that, when executed correctly, would mean speedy, efficient collaboration with no one staring at my tech table, wishing I could catch up. I want the Magic Bullet: that one codec that works in every situation, solves all of my jerky playback problems, and means I can bundle one show and send a disk on tour, regardless of what computer the show will see at the next venue. It’s also in those moments that I remind myself of the two above-mentioned axioms. And I steel myself for the groans, and I ask for some time. “Maybe we can come back to this in an hour?” Again.

So, how do we work around those two rules? How do we find an easier way? A practical bullet? Therein lies the scope of this discussion.

I’ll approach this from a couple of angles, and keep in mind that all of my information is based on research and experience – that is to say, your mileage may vary. And, to extend the driving metaphor, the first challenge I’d like to wrestle with is bottlenecks. I’ll get to codecs in another post.

The Bottlenecks

When working with video for live playback from a hard disk (narrowing the scope to QLab, as opposed to DVDs, internet video, or other delivery methods), we need to think about our hardware, and about what we can do with our files to cause as few traffic jams as possible. This starts with the hard disk.

How fast is your hard drive? What’s the bandwidth of the disk’s connection? What other traffic is holding back your video file’s data path? In a perfect world, every computer your show will see will have a solid-state drive, a SATA connection, and every video file will be accessed independently of any other data at the moment of playback. In the practical world, we often have to satisfy ourselves with a 7,200 rpm “platter” hard disk drive, FireWire 400, and a couple of files being called for simultaneously. Hopefully, we do our best to ensure that every system we use at least meets those minimum requirements. Because here’s the thing: video files are huge. Like, imposingly large files. And getting them moving efficiently is the key to good playback.
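
To put rough numbers on that (mine are assumed, nominal figures; check your own gear), here’s a quick back-of-the-envelope sketch in Python:

    # How many simultaneous streams can a given connection feed?
    # Bitrates and bandwidths below are rough, nominal figures.
    MBPS = 1_000_000  # bits per second

    stream_bitrates = {
        "DV (SD)":                25 * MBPS,
        "ProRes 422 (1080p30)":  147 * MBPS,
        "Uncompressed 8-bit HD": 995 * MBPS,
    }

    connections = {
        "FireWire 400":  400 * MBPS,  # nominal; real throughput is lower
        "FireWire 800":  800 * MBPS,
        "SATA (1.5G)":  1500 * MBPS,
    }

    for conn, bandwidth in connections.items():
        for codec, rate in stream_bitrates.items():
            print(f"{conn:>13}: {bandwidth // rate} x {codec}")

Keep in mind that nominal numbers flatter the hardware; sustained real-world throughput, especially over FireWire, is noticeably lower, which is why the margins matter.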

The key here is that hard disk drives have only one point of data access at a time. From one file to the next, data gets stored in entirely different physical locations, and the drive needs to rotate to one point, let the reading head grab those ones and zeros, then rotate to another point to grab the next chunk of data. If the operating system, the QLab application, and the video files are all stored on the same disk, then loads of reading and writing processes are being asked for simultaneously, and one poor little hard disk just can’t keep up, no matter how fast it can spin. So, step one: move your media files to a separate physical drive from your system. Two data paths mean less bottlenecking.

Here’s a weird concept: video files are so huge that it’s probably more efficient to run the system off an external drive and store the media files on an internal one. I would never set up a system like that myself, because a bumped cable would mean catastrophic failure. But it’s an interesting idea. FireWire 400 transfers data at a nominal rate of 400 million bits per second, or 400 Mbps. An internal SATA connection, like the one in your MacBook Pro, transfers at a nominal 1.5 Gbps, or more than 3 times faster. The system and QLab are doing some reading and writing, but not nearly as much reading as a single video file. So, prioritizing your data stream to video is a nice idea. In practice, this means storing your media files on an external FW 400 or 800 drive, which is usually sufficient. Of course, if you’re trying to access four videos at the same time, you’ll probably benefit from putting those files on more than one disk. Keep in mind, if your FW 400 and FW 800 connections are on the same internal bus, or if you’re daisy-chaining drives, you’re only gaining fluidity in access, not transfer. The files can be read independently, but they’ll still need to push that data through the same path, which means hitting the same bottleneck. In practice, the bottleneck of the bus is less important than the gain in access speed from having multiple sources.
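
Penciling out that daisy-chain point, again with rough, assumed numbers:

    # Two drives daisy-chained on one FireWire 400 bus: they seek
    # independently, but every bit still shares the same nominal pipe.
    MBPS = 1_000_000
    FW400_BUS = 400 * MBPS       # nominal shared-bus bandwidth
    PRORES_HD = 147 * MBPS       # rough ProRes 422 1080p bitrate

    total = 2 * PRORES_HD        # one stream per drive
    print(f"two streams need ~{total // MBPS} of {FW400_BUS // MBPS} Mbps")
    # ~294 of 400 Mbps: the shared pipe holds, so the win in access
    # speed outweighs the shared-transfer bottleneck, as noted above.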

OK, we’ve covered the hard drive, and its associated bottlenecks: speed, the access conundrum, and the data path. We’re ready to prioritize appropriately and manage our traffic accordingly. What’s next?

The other relevant hardware all comes in one big jumble, oddly enough. The CPU, GPU, RAM, and vRAM all work with each other, in several confusing ways, to get your imagery from the hard drive to the display. The basic data path, after the hard drive, goes something like this: RAM > CPU > (v)RAM > GPU > Display. I’ll start with that questionable-looking fellow there, the (v)RAM.

If you’ve created one cue in QLab, referencing one video file, and you’re sending it to one display patch, your video frames travel straight from the decoding process, to the vRAM, to the display. If your cue is told to show up on more than one display patch, that breaks the data flow a little bit: the frames have to be written to the RAM, and then distributed to the appropriate vRAM. Each graphics card has its own vRAM, so a cue that only needs one display only needs one card, and can go straight to vRAM. A cue that needs multiple displays has to hit the distribution center, which slows down the process. What’s the trick here? If you want to send one file to multiple displays, you’ll benefit from creating one cue per display. Simple enough.

Now, if you’re half the geek I am, you’re wondering why poor old GPU is stuck at the end of the data path. Why isn’t the GPU, which is made for such things, tasked with decoding video files? Well, Apple only allows H.264 files to be decoded on the GPU, and even that is limited to OS X 10.6, for our purposes. Similar constraints keep QLab stuck as a 32-bit application for the time being. Since QLab wants to be accessible to as many of you designers as possible, it has to embrace some somewhat outdated technologies, and wait for Apple to give us the tools we need to embrace the newer, more powerful stuff. One thing you can be sure of: when we have access, we’ll be embracing it! For now, there is no hardware-accelerated playback for movie files in QLab, and it’s relegated to sharing only 4 GB of RAM with all of the other 32-bit applications on the system.

“Only 4 GB of RAM? But my video files are much larger than 4 GB!” Fortunately, this is a point where QuickTime, OS X’s playback engine, has prepared a nice road for us. We’re not entirely sure of the specifics, since that’s proprietary information for Apple to keep locked in a bunker, but we are sure of a few things. When QLab calls for a file to be loaded into RAM, the whole video file doesn’t get dumped in one big chunk. It’s accessed as necessary, and as RAM becomes available. QuickTime and the OS manage that for us, and they do a pretty good job of it. That’s one point where your bottlenecks are handled for you. Also, your files have to be stored somewhere while they’re being decoded, and that somewhere is the RAM. If they don’t have to come back to the RAM (as described above), they’ll move from there to the display pretty smoothly.

So what’s left? CPU and GPU. The CPU’s role is to decode the files you’ve made. The GPU’s role is to make your adjustments, and send the frames to the display. So, here’s where your choice of codec can be incredibly important, and where your programming can create a challenge for your system.

We’ll get into codecs in a future post, but for now, keep in mind that the more you’ve compressed a video file, the more your CPU will have to work to decompress it, and that can create a bottleneck. The less you compress it, the lighter the load on the CPU, and the smoother it will flow through this part of our data path.

The GPU then gets frames from vRAM, after they’ve been decoded and put there for holding. At this point, if you’ve made scaling adjustments, rotated your image, changed its opacity, or moved it around, the GPU has to figure out how that affects the frame that it’s sending the display, and then send it out. It’s at this point that I often create an unnecessary bottleneck for myself. And I’m going to suggest a workflow that includes sufficient tech time here, because I think QLab is actually two different tools at this stage.

I try to create my artwork at a scale that will work in the space, but obviously, what I’m seeing in my imagination before projecting material on a set (for instance) will not exactly match my needs once in the theater. So, I love Custom Geometry in QLab, because it lets me rescale, rotate, and position my artwork to fit my needs. But if I’m scaling my artwork up by a factor of 1.15, that means that at playback, poor old GPU has to make that calculation on every frame it sends to the display. And that’s a more difficult calculation than a factor of 2 (computers really like 2s). In tech rehearsal, I tell the director that we may see some glitches in playback for this reason; once I’ve corrected my artwork in QLab, I’ll re-render it overnight to match the scaling adjustments I’ve made, and tomorrow we’ll see it play back smoothly. Basically, the strategy involves being prepared for imperfection in rehearsal, and pre-rendering the adjustments I’ve made once I’m satisfied with what I have. QLab in rehearsal is another tool in the composition process. QLab at playback should have all of its media already optimized as much as possible. Then the GPU only has to position and fade the artwork, which it’s very comfortable doing in real time.
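
As a concrete instance of that overnight re-render, here’s the arithmetic, with an example source size and the 1.15 factor from above:

    # Bake a Custom Geometry scale factor into the media itself, so the
    # GPU no longer rescales every frame at playback. Example numbers.
    src_w, src_h = 1920, 1080
    scale = 1.15  # the factor dialed in during tech

    # Round to even pixel dimensions; most codecs require them.
    out_w = round(src_w * scale / 2) * 2
    out_h = round(src_h * scale / 2) * 2

    print(f"re-render at {out_w}x{out_h}, then reset geometry to 1.0")
    # -> re-render at 2208x1242, then reset geometry to 1.0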

All of these strategies and workflow adjustments mean we’re looking at the equipment we have to work with, and creating media files that are optimized for the system at hand, not for the ideal system. We’re also doing everything we can to optimize our system, with the understanding that our budget and time constraints will hamper perfection in the equipment we’re able to use. So, we’re preparing as much as possible off-site, and planning at every stage for the tools we’ll have to use once we walk away from the show and hand it off to the stage manager and operator.

Understanding the hardware perspective on all of this can certainly help to create a system that will bring us closer to playback perfection. Next, how to prepare the files with the same goal. Stay tuned for Magic Bullet Part II: The Codec.

QLab at the Royal Museum in Stockholm

August 19th, 2010 by Christopher

Andreas Grill writes to say:

I am using QLab at the Royal Museum here in Stockholm. The show is fully automatic, meaning that anyone can start it just by pressing a button. It will have run over 1,350 times by the time it finishes in October, and so far we have not had a single crash. QLab is playing 8 channels of surround sound and video, controls the lights via MIDI, and is itself triggered via a MIDI Solutions F8. The show is made in both Swedish and English but is otherwise identical. You can see some pictures here:

http://www.livrustkammaren.se/default.asp?id=6779&ptid=&refid=7358

Awesome Andreas!

(Here’s Google’s translation of the museum’s page into English.)
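
For the curious, here’s roughly what that button press looks like as a MIDI message, sketched with the Python mido library. This is our illustration only; in Andreas’s rig the MIDI Solutions F8 generates the message in hardware, and the note number is an assumption:

    # Emit the kind of MIDI event a MIDI Solutions F8 sends when a
    # contact (the visitor's button) closes; QLab can be configured to
    # trigger a cue on a matching incoming note.
    import mido

    with mido.open_output() as port:  # default MIDI output port
        port.send(mido.Message('note_on', channel=0, note=60, velocity=127))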

QLab controls Blaze

July 12th, 2010 by Christopher

Memo Akten wrote to tell us how he is using QLab to coordinate this awesome streetdance show. Check it out:

Blaze from Memo Akten on Vimeo.

The show consisted of 18 sections (group cues), each with its own MTC. Some of these cues would roll onto the next automatically; others were triggered manually, after audience applause or depending on improvisation on stage. Each of these groups had a few cues to trigger audio, send MTC, and trigger visuals via a MIDI note (the visuals were running on a custom media server written in openFrameworks on a separate machine). There were also notes and reminders for directing the cast and backstage. The same MTC was driving the lighting desk and the visuals, but due to delay in the projection chain we were also using a MIDI note to trigger the visuals and compensate for the delay between projection and audio (trial and error showed this to be around 0.4-0.5 seconds, depending on venue). We used AppleScript in QLab to automate updating all of the cues (I think it was you who I emailed to ask about that, actually!).

Nice work Memo!
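
Memo’s latency compensation pencils out like this. The 0.4-0.5 second figure is from his note; the helper function is ours, for illustration:

    # Fire the visuals trigger early, so the projected frame lands in
    # sync with the audio despite the projection pipeline's delay.
    def visuals_trigger_time(audio_hit, projection_latency=0.45):
        """Seconds into the section at which to send the MIDI note."""
        return audio_hit - projection_latency

    # e.g. a visual accent that must land 12.0 s into a section:
    print(f"send the note at t={visuals_trigger_time(12.0):.2f}s")  # 11.55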

QLab feeding Clear-Com

June 21st, 2010 by Christopher

Lighting designer Matt Mills recently told us about a creative way he is using QLab in his lighting work:

Hello, I was looking at your site and thought I’d tell you how I am using your product. I am the Lighting Director for the band Daughtry, and I use QLab to call all my spotlight cues. My lighting console is able to send MIDI, so I came up with the idea of having it trigger some prerecorded clips during the show.

I’m coming out of my lighting console into my Mac via a USB-to-MIDI adapter, then coming out of my Mac’s headphone port into my Clear-Com base station via a 1/8-inch-to-XLR adapter. From there I put that into the program feed of the Clear-Com base station. I have QLab configured with all my prerecorded spot calls, about 35 of them, and I have my console triggering them at the appropriate times.

It has been working out great; I’ve been doing it for about 30 shows now. And the spotlight operators love it, because there is never an open mic picking up all the audio from the show. Just thought I’d share this with you. Great product,

Matt

photo.jpg

IMG_0314.JPG

IMG_0316.jpg

IMG_0319.JPG

I thought that was pretty clever. Thanks for sharing it Matt!
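
For the technically curious, here’s the receiving half of a rig like Matt’s, sketched in Python with the mido library. QLab does this natively with its MIDI triggers; the note map and file names below are hypothetical:

    # Listen for notes from the lighting console and map each one to a
    # prerecorded spot call. Note numbers and file names are made up.
    import mido

    SPOT_CALLS = {
        60: "spots_pickup_lead.wav",
        61: "spots_all_blackout.wav",
    }

    with mido.open_input() as port:   # default MIDI input port
        for msg in port:              # blocks, yielding incoming messages
            if msg.type == 'note_on' and msg.note in SPOT_CALLS:
                print(f"fire cue: {SPOT_CALLS[msg.note]}")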
