With the demise of ESXi, I am looking for alternatives. Currently I have pfSense virtualized with four physical NICs and a bunch of virtual ones, and it works great. Does Proxmox do this with anything like the ease of ESXi? Any other ideas?

  • TCB13@lemmy.world
    9 months ago

    Am I mistaken that the host shouldn’t be configured on the WAN interface? Can I solve this by passing the PCI device to the VM, and what’s the best practice here?

    Passing the PCI network card / device to the VM would make things more secure, since the host would never be configured on, or even touch, the card exposed to the WAN. On the other hand, passing the card through makes things less flexible, and it isn’t required.
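
    If you did want to try the passthrough route anyway, LXD can hand a physical NIC (or an entire PCI device) to a VM. Something along these lines should do it — the VM and interface names below match my example further down, and the PCI address is just a placeholder you’d get from lspci:

    # Either pass the physical WAN NIC to the VM as a nic device:
    lxc config device add havm wan nic nictype=physical parent=enp5s0
    # Or pass the whole PCI device (find the real address with lspci first):
    lxc config device add havm wan pci address=0000:05:00.0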

    I think there’s something wrong with your setup. One of my machines has a br0 and a setup like yours. 10-enp5s0.network is the physical “WAN” interface:

    root@host10:/etc/systemd/network# cat 10-enp5s0.network
    [Match]
    Name=enp5s0
    
    [Network]
    # -> note that we're just saying that enp5s0 belongs to the bridge, no IPs are assigned here
    Bridge=br0
    
    root@host10:/etc/systemd/network# cat 11-br0.netdev
    [NetDev]
    Name=br0
    Kind=bridge
    
    root@host10:/etc/systemd/network# cat 11-br0.network
    [Match]
    Name=br0
    
    [Network]
    # -> In my case I'm also requesting an IP for my host, but this isn't required; setting it to "no" also works
    DHCP=ipv4
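
    If you want to sanity-check the bridge after restarting systemd-networkd, networkctl should show br0 up and enp5s0 enslaved to it (output will obviously differ on your machine):

    root@host10:/etc/systemd/network# networkctl list
    root@host10:/etc/systemd/network# networkctl status br0
    root@host10:/etc/systemd/network# bridge link show   # enp5s0 should list br0 as its master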
    

    Now, I have a profile for “bridged” containers:

    root@host10:/etc/systemd/network# lxc profile show bridged
    config:
     (...)
    description: Bridged Networking Profile
    devices:
      eth0:
        name: eth0
        nictype: bridged
        parent: br0
        type: nic
    (...)
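
    In case it helps, a profile like that can be recreated from scratch with the stock lxc commands (profile and device names here are just the ones from my example):

    root@host10:~# lxc profile create bridged
    root@host10:~# lxc profile device add bridged eth0 nic nictype=bridged parent=br0 name=eth0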
    

    And one of my VMs with this profile:

    root@host10:/etc/systemd/network# lxc config show havm
    architecture: x86_64
    config:
      image.description: HAVM
      image.os: Debian
    (...)
    profiles:
    - bridged
    (...)
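
    For reference, attaching the profile is a one-liner; the instance and image names below are just examples:

    # Attach the profile to an existing instance:
    root@host10:~# lxc profile add havm bridged
    # Or launch a new VM with it from the start:
    root@host10:~# lxc launch images:debian/12 havm --vm -p default -p bridged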
    

    Inside the VM the network is configured like this:

    root@havm:~# cat /etc/systemd/network/10-eth0.network
    [Match]
    Name=eth0
    
    [Link]
    RequiredForOnline=yes
    
    [Network]
    DHCP=ipv4
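
    Don't forget that systemd-networkd has to be enabled inside the VM for that file to do anything; after that, networkctl should report eth0 as configured:

    root@havm:~# systemctl enable --now systemd-networkd
    root@havm:~# networkctl status eth0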
    

    Can you check if your config is done like this? If so it should work.

    • tofubl@discuss.tchncs.de
      9 months ago

      My config was more or less identical to yours, and that removed some doubt and let me focus on the right part: without a network config on br0, the host wasn’t bringing it up on boot. I thought it had something to do with the interface having an IP, but it turns out the following works as well:

      user@edge:/etc/systemd/network$ cat wan0.network
      [Match]
      Name=br0
      
      [Network]
      DHCP=no
      LinkLocalAddressing=ipv4
      
      [Link]
      RequiredForOnline=no
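
      For anyone else who lands here: with that file in place, br0 comes up on boot with only a link-local address, which you can confirm with something like:

      user@edge:/etc/systemd/network$ networkctl status br0
      user@edge:/etc/systemd/network$ ip -br addr show br0   # should only show a 169.254.x.x link-local address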
      

      Thank you once again!

      • TCB13@lemmy.world
        9 months ago

        Oh, now I remember that there’s ActivationPolicy= in the [Link] section that can be used to control what happens to the interface. At some point I even reported a bug involving that feature and VLANs.

        I thought it had something to do with the interface having an IP (…) LinkLocalAddressing=ipv4

        I’m not so sure it is about the interface having an IP… I believe your current LinkLocalAddressing=ipv4 is forcing the interface to come up because it has to assign a link-local IP. Maybe you can set LinkLocalAddressing=no and ActivationPolicy=always-up and see how it goes.
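
        Untested on my side, but what I mean is something like this for your wan0.network:

        user@edge:/etc/systemd/network$ cat wan0.network
        [Match]
        Name=br0

        [Network]
        DHCP=no
        LinkLocalAddressing=no

        [Link]
        RequiredForOnline=no
        # always-up tells networkd to bring the link up and keep it up, even with no addresses on it
        ActivationPolicy=always-up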