Written with ChatGPT 4o.
I recently ran into a workflow challenge while recording video content: I wanted to capture both my camera feed and my screen display, but instead of overlaying them in OBS (as is typical for live streaming), I wanted to record them separately so I could later compose them freely in DaVinci Resolve. My goal was to keep both sources clean and flexible for editing—ideal for adding effects, cuts, or repositioning in post.
This post is the result of a conversation I had with ChatGPT 4o, and it really helped me streamline this process.
Here's what I wanted to do, and here's the solution ChatGPT proposed.
There are two main ways to approach this, depending on how much control you want:
1. Record a single combined video and split it afterwards: lay the two sources out side by side on one OBS canvas, record that canvas to one file, then use ffmpeg to split that file into two separate videos. Example FFmpeg command:
ffmpeg -i input.mp4 -filter_complex "[0:v]crop=960:1080:0:0[left]; [0:v]crop=960:1080:960:0[right]" -map "[left]" display.mp4 -map "[right]" camera.mp4
Adjust the resolution and crop values depending on your canvas layout; a scripted version of this step appears after the list below.
2. Use the Source Record plugin for OBS: add a Source Record filter to the camera source so it's recorded to its own file while the main OBS recording captures the display.
This approach is cleaner and better if you plan to edit often.
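Here's the scripted version of the crop step mentioned above — my own sketch, not something from the ChatGPT conversation — which derives the two crop regions from the canvas size and calls FFmpeg through Python's subprocess; the 1920x1080 canvas and the filenames are assumptions:

import subprocess

CANVAS_W, CANVAS_H = 1920, 1080  # assumed OBS canvas size; each source fills one half
half = CANVAS_W // 2

filters = (
    f"[0:v]crop={half}:{CANVAS_H}:0:0[left]; "
    f"[0:v]crop={half}:{CANVAS_H}:{half}:0[right]"
)
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-filter_complex", filters,
     "-map", "[left]", "display.mp4", "-map", "[right]", "camera.mp4"],
    check=True,  # raise if ffmpeg exits with an error
)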
That’s it! This workflow gives me much more flexibility when editing content—especially for tutorials, interviews, or any scenario where I want to fine-tune the composition in post. Hope this helps!
I enjoyed learning about Disney's sodium vapor background removal process, which is used in movies such as Mary Poppins (1964), Bedknobs and Broomsticks (1971), and Pete’s Dragon (1977).
This method works much better than green- and blue-screen chroma keying, but as the video experiment shows, it's much more challenging to achieve.
How to run Google Gemma 2B- and 7B-parameter instruct models locally on the CPU and the GPU on Apple Silicon Macs.
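The post has the full details, but as a rough sketch of one way to do it — assuming the Hugging Face transformers route, which may differ from the post's approach — loading the 2B instruct model and targeting the GPU via PyTorch's mps backend looks like this:

# Assumes `pip install torch transformers` and access to the gated Gemma
# weights on Hugging Face (the model ID below is the 2B instruct variant).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "mps" if torch.backends.mps.is_available() else "cpu"  # GPU on Apple Silicon, else CPU
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b-it", torch_dtype=torch.float16
).to(device)

inputs = tokenizer("Write a haiku about Apple Silicon.", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))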
How to encode an image dataset to reduce its dimensionality and visualize it in 2D space.
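As a rough stand-in for the post's approach — here using scikit-learn's PCA on a toy digits dataset instead of the post's encoder and data — the encode-then-plot step looks like this:

# Assumes `pip install scikit-learn matplotlib`; the digits dataset stands in
# for whatever image dataset you want to visualize.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()                                   # 8x8 images flattened to 64 features
coords = PCA(n_components=2).fit_transform(digits.data)  # encode each image as a 2D point

plt.scatter(coords[:, 0], coords[:, 1], c=digits.target, cmap="tab10", s=8)
plt.colorbar(label="digit class")
plt.title("Image dataset projected to 2D")
plt.show()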
How to build a website with the Next.js React framework and TypeScript.
# TL;DR
npx create-next-app@latest my-app --ts
cd my-app
npm run build
npm start
# ready - started server on 0.0.0.0:3000, url: http://localhost:3000
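# during development, use `npm run dev` instead for hot reloading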
Here's a video in which I test whether OpenAI's DALL-E can generate usable texture maps from an uploaded image.
The texture comes with one of Apple's example projects, and the idea of generating textures with DALL-E came from Adam Watters on Discord.
My video on how to use the YouTube Data API v3 to upload videos to your channel from Python scripts and the command line is now on YouTube.
The video walks through how to create a project in the Google API Console, register an application, generate credentials, and use them in a Python script, callable from the command line, that authorizes access through your web browser and uploads videos to your YouTube account.
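For orientation, here's a condensed sketch of that flow using the two libraries installed below; the client-secrets filename, video file, and metadata are placeholders, and the official upload_video.py sample adds retries and more options:

# Assumes client_secret.json was downloaded from the Google API Console.
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

SCOPES = ["https://www.googleapis.com/auth/youtube.upload"]

flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
credentials = flow.run_local_server()  # opens a browser window to authorize

youtube = build("youtube", "v3", credentials=credentials)
request = youtube.videos().insert(
    part="snippet,status",
    body={
        "snippet": {"title": "Test upload", "description": "Uploaded via the API"},
        "status": {"privacyStatus": "private"},
    },
    media_body=MediaFileUpload("video.mp4", resumable=True),
)
response = request.execute()
print("Uploaded video id:", response["id"])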
Create and activate a Python environment, install the dependencies, and verify the install:
conda create -n yt python=3.8 -y && conda activate yt
pip install google-api-python-client
pip install google_auth_oauthlib
python -c "from apiclient.discovery import build;print(build)"
As the upload_video.py sample is written in Python 2, there are minor edits that need to be made to upgrade it to Python 3. (Here's the Python 3 version.) Replace import httplib with from http import client, replace the httplib. prefixes with client., and update the except and print statements from Python 2 to Python 3 syntax.
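For example (the exception names here are illustrative stand-ins for the ones the sample actually touches):

from http import client  # Python 2: import httplib

RETRIABLE = (client.NotConnected, client.IncompleteRead)  # Python 2: httplib.NotConnected, ...

try:
    raise client.NotConnected("demo upload error")
except client.NotConnected as e:  # Python 2: except NotConnected, e:
    print("caught: %s" % e)       # Python 2: print "caught: %s" % e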
You can delete the previously created Python environment when you're done as follows.
conda remove -n yt --all -y