Single GPU, with scripts that run before the VM starts and after it shuts down to unload and then reload the GPU driver modules from the kernel.
I think this was my starting point, and I only had to make a few small tweaks to get it right for my setup - i.e. unloading and reloading the precise set of kernel modules that block GPU passthrough on my machine.
https://gitlab.com/Karuri/vfio
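For anyone curious, the guts of it are just a couple of libvirt hook scripts. Here's a minimal sketch of what the "begin" hook tends to look like for an NVIDIA card, assuming the usual /etc/libvirt/hooks/qemu.d/<vm-name>/prepare/begin/ layout (the module names and framebuffer paths are exactly the bits that vary per machine):

```bash
#!/bin/bash
# Sketch of a "prepare/begin" hook for single-GPU passthrough (NVIDIA card assumed).
set -x

# Stop the display manager so nothing is holding the GPU
systemctl stop display-manager

# Unbind the virtual consoles and the EFI framebuffer from the card
echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

# Unload the NVIDIA modules (this is the "precise set" that differs per machine)
modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia

# Load VFIO so libvirt can hand the GPU to the guest
modprobe -a vfio_pci vfio_iommu_type1
```

The matching release/end hook just does the reverse: unload the VFIO modules, modprobe the GPU driver back in, and restart the display manager.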
At this point, from a user experience point of view, it's not much different to dual booting, just with a different boot sequence. The main advantage, though, is that I can keep the Windows OS on a small virtual hard drive for ease of backup/clone/restore, and have game installs on a dedicated NVMe that doesn't need backing up.
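To illustrate the split, something along these lines (purely illustrative; the names, sizes and the pci_0000_04_00_0 address are made up, so find your real NVMe address with `lspci -nn` or `virsh nodedev-list --cap pci`):

```bash
# Small qcow2 system disk for the Windows OS (easy to back up / clone / restore)
qemu-img create -f qcow2 /var/lib/libvirt/images/win-gaming.qcow2 120G

# Guest with the qcow2 system disk plus the games NVMe passed through as a PCI host device
virt-install \
  --name win-gaming \
  --memory 16384 \
  --vcpus 8 \
  --disk path=/var/lib/libvirt/images/win-gaming.qcow2,bus=virtio \
  --hostdev pci_0000_04_00_0 \
  --cdrom /path/to/windows.iso \
  --os-variant win10
```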
I've been 100% Linux for my daily home computing for over a year now… with one exception… To be honest, I didn't even try particularly hard to make gaming work under Linux.
Instead I have a Windows VM - set up with full passthrough access to my GPU and its own NVMe - just for Windows gaming. To my mind it's now in the same category as running console emulation.
As soon as I click Shut Down in Windows, it pops me straight back into my Linux desktop.
I had some hard-to-track-down intermittent network issues when I upgraded from LMDE 5 to LMDE 6 - the solution was to get a newer kernel from backports - it's fairly painless…
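For anyone hitting the same thing, this is roughly the backports route, assuming LMDE 6's Debian 12 ("bookworm") base:

```bash
# Add the bookworm-backports repo (adjust the components to taste)
echo 'deb http://deb.debian.org/debian bookworm-backports main contrib non-free-firmware' | \
  sudo tee /etc/apt/sources.list.d/backports.list

sudo apt update
# Pull the newer kernel (and matching headers) from backports
sudo apt install -t bookworm-backports linux-image-amd64 linux-headers-amd64
```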
Haha, funny you should say that. The friend I often share this platter with always orders an entire dish of unadon on the side to compensate.
Yep, especially surface-mount lithium batteries - they're very sensitive to the solder reflow profile being juuuust right.
1440p for the win!