
---
title: "(Ab)using mesh networks for easy remote support"
author: ["Amolith"]
date: 2021-11-01T02:51:00-04:00
lastmod: 2023-01-18T09:33:39-05:00
tags: ["Mesh networking", "Open source", "Remote support"]
categories: ["Technology"]
draft: false
toc: true
---

One of the things many of us struggle with when setting friends and
family up with Linux is remote support. Commercial solutions like
[RealVNC](https://www.realvnc.com/) and [RustDesk](https://rustdesk.com/) do exist and function very well, but are often more
expensive than we would like for answering the odd "I can't get Facebook
open!" support call. I've been on the lookout for suitable alternatives
for a couple of years, but nothing has been satisfying. Because of this,
I have held off on setting others up with any Linux distribution, even
the particularly user-friendly options such as [Linux Mint](https://linuxmint.com/) and
[elementary OS](https://elementary.io/); if I'm going to drop someone in an unfamiliar
environment, I want to be able to help with any issue within a couple of
hours, not days and _certainly_ not weeks.

[Episode 421 of LINUX Unplugged](https://linuxunplugged.com/421) gave me an awesome idea: use [Nebula](https://github.com/slackhq/nebula), a
networking tool created by Slack, [X11vnc](https://libvnc.github.io/), a very minimal VNC server, and
[Remmina](https://remmina.org/), a libre remote access tool available in pretty much every Linux
distribution, to put together a scalable, secure, and simple setup
reminiscent of products like RealVNC.


## Nebula {#nebula}

The first part of our stack is Nebula, the tool that creates a network
between all of our devices. With a traditional VPN, each client holds a
persistent connection to a central VPN server, and clients can only
communicate with one another by going through that central server. This
works wonderfully in most situations, but the resulting latency and
bandwidth restrictions would make remote support an unpleasant
experience. Instead of this model, what we want is a _mesh_ network,
where each client can connect directly to one another _without_ going
through a central system and slowing things down. This is where Nebula
comes in.

In Nebula's terminology, clients are referred to as _nodes_ and central
servers are referred to as _lighthouses_, so those are the terms I'll use
going forward.

Mesh networks are usually only possible when dealing with devices that
have static IP addresses. Each node has to know _how_ to connect with the
other nodes; John can't meet up with Bob when Bob moves every other day
without notifying anyone of his new address. This wouldn't be a problem
if Bob phoned Jill and told her where he was moving; John would call
Jill, Jill would tell him where Bob is, and the two would be able to
find each other.

With Nebula, the nodes are Bob and John, and Jill is a lighthouse. Each
node connects to a lighthouse and the lighthouse tells the nodes how to
connect with one another when they ask. It _facilitates_ the P2P
connection then _backs out of the way_ so the two nodes can communicate
directly with each other.

This allows any node to connect with any other node on any network from
anywhere in the world, as long as at least one accessible lighthouse
knows the connection details for both peers.


### Getting started {#getting-started}

The _best_ resource is [the official documentation](https://github.com/slackhq/nebula), but I'll describe the
process here as well.

After [installing the required packages](https://github.com/slackhq/nebula#1-the-nebula-binaries-or-distribution-packages-for-your-specific-platform-specifically-youll-need-nebula-cert-and-the-specific-nebula-binary-for-each-platform-you-use), make sure you have a VPS with a
static IP address to use as a lighthouse. If you want something dirt
cheap, I would recommend one of the small plans from [BuyVM](https://buyvm.net). I do have a
[referral link](https://my.frantech.ca/aff.php?aff=3783) if you want them to kick me a few dollars for your
purchase. [Hetzner](https://www.hetzner.com/cloud) (referral: `ckGrk4J45WdN`) or [netcup](https://www.netcup.eu/) (referral:
`36nc15758387844`) would also be very good options; I've used them all and
am very comfortable recommending them.


### Creating a Certificate Authority {#creating-a-certificate-authority}

After picking a device with a static IP address, it needs to be set up
as a lighthouse. This is done by first creating a Certificate Authority
(CA) that will be used for signing keys and certificates that allow our
other devices into the network. The `.key` file produced by the following
command is incredibly sensitive; with it, anyone can authorise a new
device and give it access to your network. Store it in a safe,
preferably encrypted location.

```bash
  nebula-cert ca -name "nebula.example.com"
```

I'll explain why we used a Fully-Qualified Domain Name (FQDN) as the
CA's name in a later section. If you have your own domain, feel free to
use that instead; it doesn't really matter what domain is used as long
as the format is valid.
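
If you want to double-check what was just generated, `nebula-cert` can
print a certificate's details back to you. A quick sanity check might
look like this (the `print` subcommand and `-path` flag are what current
Nebula releases ship; run `nebula-cert -h` if yours differs):

```bash
  # Inspect the CA certificate we just created; shows its name, validity
  # window, and that it's a CA.
  nebula-cert print -path ca.crt
```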


### Generating lighthouse credentials {#generating-lighthouse-credentials}

Now that we have the CA's `.crt` and `.key` files, we can create and sign
keys and certificates for the lighthouse.

```bash
  nebula-cert sign -name "buyvm.lh.nebula.example.com" -ip "192.168.100.1/24"
```

Here, we're using a FQDN for the same reason as we did in the CA. You
can use whatever naming scheme you like; I just prefer
`<vps-host>.lh.nebula...` for my lighthouses. The IP address can be in any
of the following private IP ranges; I just happened to use `192.168.100.X`
for my network.

| IP Range                      | Number of addresses |
|-------------------------------|---------------------|
| 10.0.0.0 – 10.255.255.255     | 16 777 216          |
| 172.16.0.0 – 172.31.255.255   | 1 048 576           |
| 192.168.0.0 – 192.168.255.255 | 65 536              |
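
Since the CA is what every node ultimately trusts, it's also worth
confirming that the lighthouse's new certificate really was signed by it
before copying anything anywhere. Something like the following should
work (again, check `nebula-cert -h` if the flags differ on your version):

```bash
  # Verify the lighthouse certificate against the CA we created earlier.
  nebula-cert verify -ca ca.crt -crt buyvm.lh.nebula.example.com.crt
```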


### Creating a config file {#creating-a-config-file}

The next step is creating our lighthouse's config file. The reference
config can be found in [Nebula's repo](https://github.com/slackhq/nebula/blob/master/examples/config.yml). We only need to change a few of
the lines for the lighthouse to work properly. If I don't mention a
specific section here, I've left the default values.

The section below is where we'll define certificates and keys. `ca.crt`
will remain `ca.crt` when we copy it over, but I like to leave the node's
cert and key files named as they were when generated; this makes it easy
to identify nodes by their configs. Once we copy everything over to the
server, we'll add the proper paths to the `cert` and `key` fields.

```yaml
  pki:
    ca: /etc/nebula/ca.crt
    cert: /etc/nebula/
    key: /etc/nebula/
```

The next section is for identifying and mapping your lighthouses. This
needs to be present in _all_ of the configs on _all_ nodes, otherwise they
won't know how to reach the lighthouses and will never actually join the
network. Make sure you replace `XX.XX.XX.XX` with whatever your VPS's
public IP address is. If you've used a different private network range,
those changes need to be reflected here as well.

```yaml
  static_host_map:
    "192.168.100.1": ["XX.XX.XX.XX:4242"]
```

Below, we're specifying how this node should behave: it _is_ a
lighthouse, it should answer DNS requests, the DNS server should listen
on all interfaces on port 53, it sends its IP address to lighthouses
every 60 seconds (though this option doesn't actually have any effect
when `am_lighthouse` is set to `true`), and this lighthouse should not
send reports to other lighthouses. The bit about DNS will be discussed
later.

```yaml
  lighthouse:
    am_lighthouse: true
    serve_dns: true
    dns:
      host: 0.0.0.0
      port: 53
    interval: 60
    hosts:
```

The next bit is about [hole punching](https://en.wikipedia.org/wiki/Hole_punching_%28networking%29), also called _NAT punching_, _NAT
busting_, and a few other variations. Make sure you read the comments for
better explanations than I'll give here. `punch: true` enables hole
punching. I also like to enable `respond` just in case nodes are on
particularly troublesome networks; because we're using this as a support
system, we have no idea what networks our nodes will actually be
connected to. We want to make sure devices are available no matter where
they are.

```yaml
  punchy:
    punch: true
    respond: true
    delay: 1s
```

`cipher` is a big one. The value _must_ be identical on _all_ nodes _and_
lighthouses. `chachapoly` is more compatible so it's used by default. The
devices _I_ want to connect to are all x86 Linux, so I can switch to `aes`
and benefit from [a small performance boost](https://www.reddit.com/r/networking/comments/iksyuu/comment/g3ra5cv/?utm_source=share&utm_medium=web2x&context=3). Unless you know _for sure_
that you won't need to work with _anything_ else, I recommend leaving it
set to `chachapoly`.

```yaml
  cipher: chachapoly
```

The last bit I modify is the firewall section. I leave almost everything
at its defaults but _remove_ the block containing `port: 443`. I don't
_need_ the `laptop` and `home` groups (groups will be explained later) to
access port `443` on this node, so I shouldn't include the statement. If
you have different needs, take a look at the comment explaining how the
firewall portion works and make those changes.

Again, I _remove_ the following bit from the config.

```yaml
    - port: 443
      proto: tcp
      groups:
        - laptop
        - home
```


### Setting the lighthouse up {#setting-the-lighthouse-up}

We've got the config, the certificates, and the keys. Now we're ready to
actually set it up. After SSHing into the server, grab the [latest
release of Nebula for your platform](https://github.com/slackhq/nebula/releases/latest), unpack it, make the `nebula` binary
executable, then move it to `/usr/local/bin` (or some other location
fitting for your platform).

```bash
  wget https://github.com/slackhq/nebula/releases/download/vX.X.X/nebula-PLATFORM-ARCH.tar.gz
  tar -xvf nebula-*
  chmod +x nebula
  mv nebula /usr/local/bin/
  rm nebula-*
```

Now we need a place to store our config file, keys, and certificates.

```bash
  mkdir /etc/nebula/
```

The next step is copying the config, keys, and certificates to the
server. I use `rsync`, but you can use whatever you're comfortable with.
The following four files need to be uploaded to the server.

-   `config.yml`
-   `ca.crt`
-   `buyvm.lh.nebula.example.com.crt`
-   `buyvm.lh.nebula.example.com.key`

With `rsync`, that would look something like this. Make sure `rsync` is also
installed on the VPS before attempting to run the commands though;
you'll get an error otherwise.

```bash
  rsync -avmzz ca.crt user@example.com:
  rsync -avmzz config.yml user@example.com:
  rsync -avmzz buyvm.lh.* user@example.com:
```

SSH back into the server and move everything to `/etc/nebula/`.

```bash
  mv ca.crt /etc/nebula/
  mv config.yml /etc/nebula/
  mv buyvm.lh* /etc/nebula/
```

Edit the config file and ensure the `pki:` section looks something like
this, modified to match your hostnames of course.

```yaml
  pki:
    ca: /etc/nebula/ca.crt
    cert: /etc/nebula/buyvm.lh.nebula.example.com.crt
    key: /etc/nebula/buyvm.lh.nebula.example.com.key
```

Run the following command to make sure everything works properly.

```bash
  nebula -config /etc/nebula/config.yml
```

The last step is daemonizing Nebula so it runs every time the server
boots. If you're on a machine using systemd, dropping the following
snippet into `/etc/systemd/system/nebula.service` should be sufficient. If
you're using something else, check [the examples directory](https://github.com/slackhq/nebula/tree/master/examples/) for more
options.

```text
  [Unit]
  Description=nebula
  Wants=basic.target
  After=basic.target network.target
  Before=sshd.service

  [Service]
  SyslogIdentifier=nebula
  ExecReload=/bin/kill -HUP $MAINPID
  ExecStart=/usr/local/bin/nebula -config /etc/nebula/config.yml
  Restart=always

  [Install]
  WantedBy=multi-user.target
```
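
With the unit file in place, reload systemd and enable the service so it
starts now and on every boot:

```bash
  systemctl daemon-reload
  systemctl enable --now nebula.service
  systemctl status nebula.service   # confirm it's active and hasn't crashed
```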

We're almost done!


### Setting individual nodes up {#setting-individual-nodes-up}

This process is almost exactly the same as setting lighthouses up. All
you'll need to do is generate a couple of certs and keys then tweak the
configs a bit.

The following command creates a new cert/key for USER's node with the IP
address `192.168.100.2`. The resulting files would go on the _remote_ node,
not yours. Replace `HOST` and `USER` with fitting values.

```bash
  nebula-cert sign -name "HOST.USER.nebula.example.com" -ip "192.168.100.2/24"
```

The following command will create a _similar_ cert/key, but it will be
part of the `support` group and have its own IP address; every device on
the network needs a unique IP. The files resulting from this should go
on _your_ nodes. With the config we'll create next, nodes in the `support`
group will be able to VNC and SSH into other nodes. Your nodes need to
be in the `support` group so you'll have access to the others.

```bash
  nebula-cert sign -name "HOST.USER.nebula.example.com" -ip "192.168.100.3/24" -groups "support"
```

On to the config now. This tells the node that it is _not_ a lighthouse,
it should _not_ resolve DNS requests, it _should_ ping the lighthouses and
tell them its IP address every 60 seconds, and the node at `192.168.100.1`
is one of the lighthouses it should report to and query from. If you
have more than one lighthouse, add them to the list as well.

```yaml
  lighthouse:
    am_lighthouse: false
    #serve_dns: false
    #dns:
      #host: 0.0.0.0
      #port: 53
    interval: 60
    hosts:
      - "192.168.100.1"
```

The other bit that should be modified is the `firewall:` section, and this
is where the groups we created earlier are important. Review its
comments and make sure you understand how it works before proceeding.

We want to allow inbound connections on port 5900, the standard port for
VNC, and port 22, the standard for SSH. Additionally, we _only_ want to
allow connections from nodes in the `support` group. Any _other_ nodes
should be denied access.

Note that including this section is not necessary on _your_ nodes, those
in the `support` group. It's only necessary on the remote nodes that
you'll be connecting to. As long as the `outbound:` section in the config
on _your_ node allows any outbound connection, you'll be able to access
other nodes.

```yaml
    - port: 5900
      proto: tcp
      groups:
      - support

    - port: 22
      proto: tcp
      groups:
      - support
```
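
Since these rules hinge entirely on group membership, it's worth
double-checking which groups a certificate actually carries before
copying it anywhere; the certificate details include them. A quick check
(flags as in current Nebula releases; see `nebula-cert -h` if yours
differ):

```bash
  # The output should list "support" under Groups for your own node's cert.
  nebula-cert print -path HOST.USER.nebula.example.com.crt
```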

The certs, key, config, binary, and systemd service should all be copied
to the same places on all of these nodes as on the lighthouse.
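
As a condensed recap, and assuming you've already transferred each node's
files into its home directory, the steps on a remote node boil down to
something like this (filenames follow the `HOST.USER` naming scheme from
above):

```bash
  # Put everything where the config expects it, then enable the service.
  mkdir -p /etc/nebula/
  mv ca.crt config.yml HOST.USER.nebula.example.com.* /etc/nebula/
  systemctl daemon-reload
  systemctl enable --now nebula.service
```

Once the service is running on both ends, pinging the remote node's
Nebula IP (`192.168.100.2` in the example above) from one of your
`support` machines is a quick way to confirm the overlay network is
actually working.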


## X11vnc {#x11vnc}

_Alright._ The hardest part is finished. Now on to setting `x11vnc` up on
the nodes you'll be supporting.

All you should need to do is install `x11vnc` using the package manager
your distro ships with, generate a 20-character password with `pwgen -s
20 1`, run the following command, paste the password, wait for `x11vnc` to
start up, make sure it's running correctly, press `Ctrl` + `C`, then add the
command to the DE's startup applications!

```bash
  x11vnc --loop -usepw -listen <nebula-ip> -display :0
```

`--loop` tells `x11vnc` to restart once you disconnect from the session.
`-usepw` is pretty self-explanatory. `-listen <nebula-ip>` is important; it
tells `x11vnc` to only listen on the node's Nebula IP address. This
prevents randos in a coffee shop from seeing an open VNC port and trying
to brute-force the credentials. `-display :0` just defines which X11
server display to connect to.
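
Put together, and assuming the remote node's Nebula IP is `192.168.100.2`,
the whole sequence might look like the sketch below. (`-storepasswd` with
no arguments should prompt for the password and save it to `~/.vnc/passwd`,
which is where `-usepw` looks first; consult `x11vnc -help` if your version
behaves differently.)

```bash
  # Generate a strong password to hand to -storepasswd.
  pwgen -s 20 1
  # Prompts for the password and writes it to ~/.vnc/passwd.
  x11vnc -storepasswd
  # Serve the display, listening only on the Nebula address.
  x11vnc --loop -usepw -listen 192.168.100.2 -display :0
```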

Some distributions, like elementary OS and those that use KDE or GNOME,
will surface a dialogue for managing startup applications if you just
press the Windows (Super) key and type `startup`. If that doesn't work,
you'll have to root around in the settings menus, consult the
distribution's documentation, or ask someone else that might know.
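
If you'd rather skip the GUI entirely, most desktop environments also
honour the XDG autostart convention, so dropping a small `.desktop` file
into `~/.config/autostart/` should achieve the same thing. A sketch, again
assuming `192.168.100.2` is the node's Nebula IP:

```bash
  mkdir -p ~/.config/autostart
  # Write a minimal autostart entry that launches x11vnc at login.
  printf '%s\n' \
    '[Desktop Entry]' \
    'Type=Application' \
    'Name=x11vnc' \
    'Exec=x11vnc --loop -usepw -listen 192.168.100.2 -display :0' \
    > ~/.config/autostart/x11vnc.desktop
```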

After adding it to the startup applications, log out and back in to make
sure it's running in the background.


## Remmina {#remmina}

Now that our network is functioning properly and the VNC server is set
up, we need something that connects to the VNC server over the fancy
mesh network. Enter [Remmina](https://remmina.org/). This one goes on _your_ nodes.

Remmina is a multi-protocol remote access tool available in pretty much
every distribution's package archive as `remmina`. Install it, launch it,
add a new connection profile in the top left, give the profile a
friendly name (I like to use the name of the person I'll be supporting),
assign it to a group, such as `Family` or `Friends`, set the Protocol to
`Remmina VNC Plugin`, enter the node's Nebula IP address in the Server
field, then enter their username and the 20-character password you
generated earlier. I recommend setting the quality to Poor, but Nebula
is generally performant enough that any of the options are suitable. I
just don't want to have to disconnect and reconnect with a lower quality
if the other person happens to be on a slow network.

Save and test the connection!

If all goes well and you see the other device's desktop, you're done
with the VNC section! Now on to SSH.


## SSH {#ssh}

First off, make sure `openssh-server` is installed on the remote node;
`openssh-client` would also be good to have, but from what I can tell,
it's not strictly necessary. You _will_ need `openssh-client` on _your_ node,
however. If you already have an SSH key, copy it over to
`~/.ssh/authorized_keys` on the remote node. If you don't, generate one
with `ssh-keygen -t ed25519`. This will create an Ed25519 SSH key pair.
Ed25519 keys are shorter and faster than RSA and more secure than ECDSA
or DSA. If that means nothing to you, don't worry about it. Just note
that this key might not interact well with older SSH servers; you'll
know if you need to stick with the default RSA. Otherwise, Ed25519 is
the better option. After key generation has finished, copy
`~/.ssh/id_ed25519.pub` (note the `.pub` extension) from your node to
`~/.ssh/authorized_keys` on the remote node. The file _without_ `.pub` is your
_private_ key. Like the Nebula CA certificate we generated earlier, this
is extremely sensitive and should never be shared with anyone else.
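
If password authentication is still enabled on the remote node at this
point (it is by default on most distributions), `ssh-copy-id` can append
the public key for you over the Nebula network; otherwise, paste it into
`~/.ssh/authorized_keys` by hand. A sketch, assuming the remote node's
Nebula IP from earlier:

```bash
  # Generate the key pair (accept the defaults or set a passphrase).
  ssh-keygen -t ed25519
  # Append the public key to ~/.ssh/authorized_keys on the remote node.
  ssh-copy-id -i ~/.ssh/id_ed25519.pub USER@192.168.100.2
```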

Next is configuring SSH to only listen on Nebula's interface; as with
`x11vnc`, this prevents randos in a coffee shop from seeing an open SSH
port and trying to brute-force their way in. Set the `ListenAddress`
option in `/etc/ssh/sshd_config` to the remote node's Nebula IP address.
If you want to take security a step further, search for
`PasswordAuthentication` and set it to `no`. This means your SSH key is
_required_ for gaining access via SSH. If you mess up Nebula's firewall
rules and accidentally give other Nebula devices access to this machine,
they still won't be able to get in unless they have your SSH key. I
_personally_ recommend disabling password authentication, but it's not
absolutely necessary. After making these changes, run `systemctl restart
sshd` to apply them.
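
Before restarting, it's worth letting `sshd` check its own configuration;
a syntax error in `sshd_config` will otherwise stop the daemon from coming
back up. Something like this (run as root) should do:

```bash
  # Validate the config file, then show the two settings we just changed.
  sshd -t
  sshd -T | grep -iE 'listenaddress|passwordauthentication'
  systemctl restart sshd
```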

Now that the SSH server is listening on Nebula's interface, it will
actually fail to start when the machine (re)boots. The SSH server starts
faster than Nebula does, so it will look for the interface before Nebula
has even had a chance to connect. We need to make sure systemd waits for
Nebula to start up and connect before it tells SSH to start; run
`systemctl edit --full sshd` and add the following line in the `[Unit]`
section, above `[Service]`.

```text
  After=nebula.service
```

Even now, there's still a bit of a hiccup. Systemd won't start SSH until
Nebula is up and running, which is good. Unfortunately, even after
Nebula has started, it still takes a minute to bring the interface up,
causing SSH to crash. To fix _this_, add the following line directly below
`[Service]`.

```text
  ExecStartPre=/usr/bin/sleep 30
```

If the `sleep` executable is stored in a different location, make sure you
use that path instead. You can check by running `which sleep`.

When the SSH _service_ starts up, it will now wait an additional 30
seconds before actually starting the SSH _daemon_. It's a bit of a hacky
solution but it works™. If you come up with something better, please
send it to me and I'll include it in the post! My contact information is
at the bottom of [this site's home page](/).

After you've made these changes, run `systemctl daemon-reload` to make
sure systemd picks up on the modified service file, then run `systemctl
restart sshd`. You should be able to connect to the remote node from your
node using the following command.

```bash
  ssh USER@<nebula-ip>
```

If you want to make the command a little simpler so you don't have to
remember the IP every time, create `~/.ssh/config` on your node and add
these lines to it.

```text
  Host USER
    Hostname <nebula-ip>
    User USER
```

Now you can just run `ssh USER` to get in. If you duplicate the above
block for all of the remote nodes you need to support, you'll only have
to remember the person's username to SSH into their machine.


## Going further with Nebula {#going-further-with-nebula}

This section explains why we used FQDNs in the certs and why the DNS
resolver is enabled on the lighthouse.

Nebula ships with a built-in resolver meant specifically for mapping
Nebula node hostnames to their Nebula IP addresses. Running a public DNS
resolver is very much discouraged because it can be abused in terrible
ways. However, the Nebula resolver mitigates this risk because it _only_
answers queries for Nebula nodes. It doesn't forward requests to any
other servers, nor does it attempt to resolve any domain other than what
was defined in its certificate. If you use the example I gave above,
that would be `nebula.example.com`; the lighthouse will attempt to resolve
any subdomain of `nebula.example.com` but it will just ignore `example.com`,
`nebula.duckduckgo.com`, `live.secluded.site`, etc.

Taking advantage of this resolver requires setting it as your secondary
resolver on any device you want to be able to resolve hostnames from.
If you were to add the lighthouse's IP address as your secondary
resolver on your PC, you could enter `host.user.nebula.example.com` in
Remmina's server settings _instead of_ `192.168.100.2`.
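
A quick way to check that the lighthouse's resolver is answering as
expected is to query it directly with `dig` (or `host`/`nslookup`),
pointing the query at the lighthouse's Nebula address:

```bash
  # Should print the node's Nebula IP, e.g. 192.168.100.2.
  dig +short host.user.nebula.example.com @192.168.100.1
```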

But how you do so is beyond the scope of this post!

If you're up for some _more_ shenanigans later on down the line, you could
set up a Pi-Hole instance backed by Unbound and configure Nebula as
Unbound's secondary resolver. With this setup, you'd get DNS-level ad
blocking _and_ the ability to resolve Nebula hostnames. Pi-Hole would query
Unbound for `host.user.nebula.example.com`, Unbound would receive no
answer from the root servers because the domain doesn't exist outside of
your VPN, Unbound would fall back to Nebula, Nebula would give it an
answer, Unbound would cache the answer and tell Pi-Hole, Pi-Hole would
cache the answer and tell your device, then your device would cache the
answer, and you can now resolve any Nebula host!

Exactly how you do _that_ is **_definitely_** beyond the scope of this post :P

If you set any of this up, I would be interested to hear how it goes! As
stated earlier, my contact information is at the bottom of the site's
home page :)