We’ve made a conscious decision not to emulate the old analog preview/program switcher metaphor. It is very confusing to non-broadcast people, and it no longer adds much value: you can tap directly on the video in the multiview you want to switch to, and the automation layers and remote control surfaces let you define exactly what should happen when you press a button.
@Oliver_Boinx
I usually use the Multiview “Program + 4 sources” and I really disagree with this decision, because when you are using ML as a multicamera switcher, you really need a big “next camera” preview before switching it on air, in order to check focus, iris and composition.
With the small preview squares, that is really hard.
You could use the windowed MultiView (2) on an external monitor instead of the top-to-bottom one. With same-sized segments, each preview gets 25% of the monitor. It’s not the solution you’re looking for, but it’s better than the tiny windows.
It would be interesting to have a “layer sets” preview window with the ability to “pop out” multiple previews, to see what it would look like if you hit various layer sets. I think this would accomplish the requirements of @JMVBMW while still “not emulat[ing] the old analog preview/program switcher metaphor”. I am sure this would require a LOT of processing power for even one preview (and even more for more than one), but I’m feeling pretty confident on my new M1 Max (maybe overconfident!).
That is certainly something to consider for a future version.
Just for a bit of background: this would require running multiple render pipelines. Rendering is, of course, the most CPU-, GPU- and memory-consuming task, so every popped-out Layer Set would basically double the memory, CPU and GPU load. Eventually we will be able to do this with even more powerful Apple Silicon based Macs.
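To make the memory part of that cost concrete, here is a back-of-the-envelope sketch. The BGRA pixel format and the buffer count per pipeline are illustrative assumptions, not mimoLive internals, and this ignores the CPU/GPU work entirely:

```python
# Rough frame-buffer memory cost of one extra render pipeline,
# assuming 8-bit BGRA buffers (4 bytes per pixel) and a small
# swap chain of buffers per pipeline. Numbers are illustrative.
def pipeline_memory_mib(width, height, buffers_per_pipeline=3):
    bytes_per_frame = width * height * 4  # BGRA, 1 byte per channel
    return buffers_per_pipeline * bytes_per_frame / (1024 * 1024)

# One extra popped-out Layer Set preview at 1080p vs. 4K:
hd = pipeline_memory_mib(1920, 1080)   # ~23.7 MiB
uhd = pipeline_memory_mib(3840, 2160)  # ~94.9 MiB
```

So even on the pure-memory side, several popped-out 4K previews add up quickly, before counting the duplicated render work.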
I hear you about MIDI, and there is a plan for how to do it. It might happen in 2022.
However, I would really recommend trying the Remote Control Surfaces on an iPad. If your document is locked so you don’t need to access it during production, you can also use the full-screen multiview on your single display. I’m currently writing a blog post on how I ran a handball game live stream, using the Remote Control Surface on an iPad for graphics and the mouse and multiview for switching.
I second @Oliver_Boinx’s suggestion to use remote control surfaces. When doing a larger show I always have two iPads, a laptop, and sometimes a phone with different control surfaces open. It also lets you easily have multiple people working together as necessary.
A new suggestion for a feature every layer should have: the ability to (optionally) group layer variants, with a title, separate disclosure triangles, and some sort of “live” indicator (but not a button). I find myself having layers with 10–15 variants, and I have started using “fake” layer variants just titled “---- Group Name —” to divide the layer variants.
I’m eagerly awaiting an update too. I suspect most of their engineers have been busy with the new release of FotoMagico and the inevitable issues any new release brings, but I’ll let @Oliver_Boinx or @Achim_Boinx comment! Looking forward to any new release, but to be honest, the most recent beta has been rock solid for me… and, I think, for the first time I’m using beta software in a production workflow, mostly because I have a new M1 Max Mac!
the ‘option’ to animate the transformation from one variant to another (like the PIP window does), so that you can choose whether you cut from one variant to the other or transition with an animation
the ability to choose the animation speed from one variant to another
X, Y, Z rotation on all layers
shadows on all layers (with an allowed shadow distance of more than 108; BTW, why is that the limit?)
…and finally…
if the Syphon Sender layer could take multiple layer outputs as inputs and mix them together, it would let you manage video mixes like audio mixes. Very powerful.
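For what it’s worth, the audio analogy maps quite directly: mixing N layer outputs is just per-pixel weighted summing, the way an audio mixer sums channels with gains. A minimal NumPy sketch of the idea (illustrative only; this has nothing to do with mimoLive’s actual Syphon implementation):

```python
import numpy as np

def mix_layers(layers, gains):
    """Blend several RGB frames with per-layer gains, like summing
    audio channels on a mixer, then clip to the valid 0..1 range."""
    acc = np.zeros_like(layers[0], dtype=np.float64)
    for frame, gain in zip(layers, gains):
        acc += gain * frame.astype(np.float64)
    return np.clip(acc, 0.0, 1.0)

# Two tiny 1x1 "frames": pure red and pure blue, mixed 50/50
# yields magenta at half intensity.
red = np.array([[[1.0, 0.0, 0.0]]])
blue = np.array([[[0.0, 0.0, 1.0]]])
out = mix_layers([red, blue], [0.5, 0.5])
```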
M
I too would love to transition between variants without having to implement automation. I am constantly switching between variants on a live screen. Unfortunately the variant dissolves in but CUTs out, because we’re HOT EXITING the previous variant instead of transitioning out of it.
You could Syphon out a layer tracking the latest camera (somehow) and have it displayed by another application on macOS, full screen in 4K on a 27" colour-correct monitor.
This would involve using a sprite node in the QC file that makes up the layer’s graphics code, like the 3D Placer layer uses, instead of a billboard node. IIRC Placers use billboards because they are more pixel-accurate than sprites; I’m not sure how the computational overhead compares.