Software development notes


27 March 2023

OpenBMC Development on an Apple M1 MacBook Pro

by Andrew

I’ve had an M1 Max MacBook Pro lying beside me for some time now, waiting for me to find a good workflow and migrate onto it.

As it turns out, sandbagging on the migration for six months or so was a bit of a blessing. The software ecosystem evolved quite a bit from when I first tried to figure out how I wanted to work with the machine. Until now I’ve been running Linux on various Lenovo Thinkpads, so I wanted a workflow with similar properties. I want to use Linux, but am constrained to continuing to run macOS, which rules out e.g. Asahi.

However, we can run Linux under a hypervisor, and with that I had the following goals:

  1. Use Linux to maintain familiarity
  2. Run Linux as a native guest (aarch64) for minimal overhead
  3. Bridged networking for unhampered guest network access
  4. Keep critical data on the host and share it into the guest
  5. Secondary activities like web browsing and chat continue to happen on the host

Critical data in this case is essentially my $HOME from my Linux laptop. I want to keep the point of coherency on the host so I don’t accidentally wipe the critical data out by mindlessly blowing away a VM disk image. Essentially, the VM itself shouldn’t be special. Further, I’d like that $HOME data copied from my Thinkpad(s) to serve as my $HOME in the Linux guest.

To continue performing secondary activities under macOS, I plan to simply ssh into the guest. This keeps all the copy/paste and click semantics working without too much hassle.
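To smooth that over, an entry in the host-side ~/.ssh/config helps. A minimal sketch — the alias, address, and user below are illustrative placeholders, not values from my setup:

```shell
# Append a host alias for the guest to the macOS-side ssh config.
# The address and user below are illustrative placeholders.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host fedora-guest
    HostName 192.168.64.2
    User andrew
    ForwardAgent yes
EOF
```

With that in place, `ssh fedora-guest` from the host drops straight into the development environment.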

Previously I’d cooked up some horrific combination of QEMU, vde_vmnet for network bridging, and 9pfs for directory sharing between the host and the guest. There was plenty wrong with this arrangement. Hooking QEMU up to vde_vmnet avoids the need to run QEMU as root for bridged networking, but externalises the network from the QEMU process, which appeared to cause some throughput issues. These were compounded by fs-cache bugs for 9pfs in the guest Ubuntu kernel, which meant I had to rebuild the guest kernel to disable fs-cache. Further, it turned out that QEMU had 9pfs issues all of its own.

Software Bits

To get there this time around I’ve used:

  1. UTM 4.1.6 (75)
    1. Experimental support for Hypervisor Virtualization Framework (HVF)
    2. VirtioFS support
    3. Boot from ISO support for HVF guests
  2. Fedora 37


I’ve created a guest with the following configuration and resources:

  1. HVF Guest
  2. 8 cores
  3. 16GiB RAM
  4. 128GiB Storage

With that, I did a stock install of Fedora 37.


With Fedora 37 installed in the guest I needed a few tricks to get things working as I desired.

On the host

  1. Create a volume with case-sensitive APFS for my home data
  2. Sync Linux $HOME onto the case-sensitive volume
  3. Create a VirtioFS share of my home data in UTM
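The first two steps might look something like the following sketch; the container disk, volume name, and source host/user are all placeholders for illustration, not values from my setup:

```shell
# Add a case-sensitive APFS volume to an existing container (macOS host).
# "disk3" and "Linux" are placeholders; check `diskutil list` for yours.
diskutil apfs addVolume disk3 "Case-sensitive APFS" Linux

# Sync the Linux $HOME across, preserving permissions, symlinks and
# hardlinks; "thinkpad" and "andrew" are placeholder host/user names.
rsync -aH --delete thinkpad:/home/andrew/ /Volumes/Linux/andrew/
```

Case sensitivity matters here: the default APFS personality is case-insensitive, which can silently collide files that a Linux $HOME keeps distinct.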

In the guest

Set up the directory share as my guest home directory

  1. mkdir /mnt/host
  2. echo 'share /mnt/host virtiofs rw,nofail 0 0' >> /etc/fstab
  3. rm -rf /home/$USER
  4. ln -s /mnt/host/$USER /home/$USER
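A quick sanity check of the result, assuming the share was named `share` in UTM and the steps above were run as root:

```shell
# Mount everything listed in fstab, then confirm the share is virtiofs
# and that the home directory now points into it.
mount -a
findmnt -t virtiofs /mnt/host
ls -ld /home/$USER    # should be a symlink into /mnt/host
```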

Work around the VirtioFS share mangling permissions (?)

Fixing this is a double win: the build artifacts stay inside the guest, which sidesteps the permission problem and saves the overhead of pushing them out to the host.

  1. mkdir -p /var/tmp/bitbake/build
  2. ln -s /var/tmp/bitbake/build ~/src/openbmc/openbmc/build

Fix qemu-user so meson actually works under bitbake

Attempting to build OpenBMC in the guest eventually led to meson complaining that the executables created by cross-compilers weren’t runnable. On the surface this seems fair, but due to ✨implementation details✨ of the meson support in Poky, running cross-built binaries on the build machine is necessary.

Once I eventually dug through all the bitbake and meson logs to find out what was actually being invoked, it turned out qemu-arm was producing the following error:

qemu-arm: Unable to reserve 0xffff0000 bytes of virtual
address space at 0x8000 (Success) for use as guest address
space (check your virtual memory ulimit setting,
min_mmap_addr or reserve less using -R option)

In the process of tracking this down I ended up sending a couple of minor patches to qemu. Anyway, the issue was largely solved by:

  1. sudo dnf install qemu-user
  2. echo 'vm.mmap_min_addr = 65536' > /etc/sysctl.d/qemu-arm.conf
  3. systemctl reboot
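After the reboot it’s worth confirming the sysctl fragment took effect; the qemu-arm re-run is sketched with a placeholder path rather than a real binary:

```shell
# Read back the runtime value; with the fragment applied, this setup
# should report 65536.
cat /proc/sys/vm/mmap_min_addr

# Then retry the invocation that failed earlier, e.g.:
#   qemu-arm <path-to-cross-built-binary>
```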

Other random things that went wrong

Various things went wrong in silly ways. As a bit of a record:

Fix up qemu submodules

I hadn’t done any upstream development in recent times and submodules had come, gone and shifted around:

  1. git submodule sync --recursive
  2. git submodule update --init --recursive

Deal with qemu build dependencies

  1. dnf builddep qemu

Fix up GitHub’s SSH host key

GitHub managed to publish their private SSH RSA host key right around the time I started this migration, so that needed fixing in my ~/.ssh/known_hosts.

Fix up time being in the past

Not sure what happened here, but I managed to freeze the VM several times and this may have contributed to the issue. The freezing seemed to begin after allocating 32GiB of memory to the VM, which is the entirety of the host RAM. It’s been stable since I reduced that to 16GiB.

  1. timedatectl set-ntp false
  2. timedatectl set-time 22:09
  3. timedatectl set-ntp true

How It Went

With all this configured I found I could build a full OpenBMC image in the guest in a bit over an hour, from cold caches. Taking that straight to the bank!