The old pattern held vmLocksMu to get or create a *sync.Mutex, then
released vmLocksMu before calling lock.Lock(). In the gap between the
two operations the map was unguarded: any future cleanup path that
deleted entries could remove the entry before lock.Lock() ran, letting
another goroutine create a fresh *sync.Mutex for the same ID. The two
callers would then hold independent locks with no mutual exclusion.
Fix: replace the manual map + vmLocksMu pair with a sync.Map and
LoadOrStore. LoadOrStore performs the lookup and the insert as a
single atomic operation: exactly one *sync.Mutex wins for each VM ID,
with no release-then-reacquire gap. vmLocksMu is removed.
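A sketch of the fixed shape under the same assumed names:

```go
package vmlock

import "sync"

// vmLocks maps VM ID -> *sync.Mutex; sync.Map needs no separate guard.
var vmLocks sync.Map

func lockVM(id string) *sync.Mutex {
	// LoadOrStore either returns the mutex already stored for id or
	// atomically stores and returns the new one, so every caller for
	// the same ID sees the same *sync.Mutex.
	lock, _ := vmLocks.LoadOrStore(id, &sync.Mutex{})
	mu := lock.(*sync.Mutex)
	mu.Lock()
	return mu
}
```

The spare &sync.Mutex{} allocated on a Load hit is simply discarded,
which is the usual cost of this pattern.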