---
title: "(Ab)using mesh networks for easy remote support"
author: ["Amolith"]
cover: ./cover.png
date: 2021-11-01T02:51:00-04:00
lastmod: 2023-01-18T09:33:39-05:00
tags: ["Mesh networking", "Open source", "Remote support"]
categories: ["Technology"]
draft: false
toc: true
---

One of the things many of us struggle with when setting friends and
family up with Linux is remote support. Commercial solutions like
[RealVNC](https://www.realvnc.com/) and [RustDesk](https://rustdesk.com/) do exist and function very well, but are often more
expensive than we would like for answering the odd "I can't get Facebook
open!" support call. I've been on the lookout for suitable alternatives
for a couple of years, but nothing has been satisfying. Because of this, I
have held off on setting others up with any Linux distribution, even the
particularly user-friendly options such as [Linux Mint](https://linuxmint.com/) and [elementary OS;](https://elementary.io/)
if I'm going to drop someone in an unfamiliar environment, I want to be
able to help with any issue within a couple of hours, not days and
_certainly_ not weeks.

[Episode 421 of LINUX Unplugged](https://linuxunplugged.com/421) gave me an awesome idea to use [Nebula,](https://github.com/slackhq/nebula) a
networking tool created by Slack, [X11vnc,](https://libvnc.github.io/) a very minimal VNC server, and
[Remmina,](https://remmina.org/) a libre remote access tool available in pretty much every Linux
distribution, to set up a scalable, secure, and simple system reminiscent
of products like RealVNC.

## Nebula {#nebula}

The first part of our stack is Nebula, the tool that creates a network
between all of our devices. With traditional VPNs, every client holds a
persistent connection to a central VPN server, and clients communicate
with one another by routing through that central server. This works
wonderfully in most situations, but the extra hop introduces latency
and bandwidth restrictions that would make remote support an unpleasant
experience. Instead of this model, what we want is a _mesh_ network,
where each client can connect directly to one another _without_ going
through a central system and slowing things down. This is where Nebula
comes in.

In Nebula's terminology, clients are referred to as _nodes_ and central
servers are referred to as _lighthouses_, so those are the terms I'll use
going forward.

Mesh networks are usually only possible when dealing with devices that
have static IP addresses. Each node has to know _how_ to connect with the
other nodes; John can't meet up with Bob when Bob moves every other day
without notifying anyone of his new address. This wouldn't be a problem
if Bob phoned Jill and told her where he was moving; John would call
Jill, Jill would tell him where Bob is, and the two would be able to
find each other.

With Nebula, Bob and John are nodes and Jill is a lighthouse. Each node
connects to a lighthouse and the lighthouse tells the nodes how to
connect with one another when they ask. It _facilitates_ the P2P
connection then _backs out of the way_ so the two nodes can communicate
directly with each other.

It allows any node to connect with any other node on any network from
anywhere in the world, as long as one lighthouse is accessible that
knows the connection details for both peers.

### Getting started {#getting-started}

The _best_ resource is [the official documentation,](https://github.com/slackhq/nebula) but I'll describe the
process here as well.

After [installing the required packages,](https://github.com/slackhq/nebula#1-the-nebula-binaries-or-distribution-packages-for-your-specific-platform-specifically-youll-need-nebula-cert-and-the-specific-nebula-binary-for-each-platform-you-use) make sure you have a VPS with a
static IP address to use as a lighthouse. If you want something dirt
cheap, I would recommend one of the small plans from [BuyVM.](https://buyvm.net) I do have a
[referral link](https://my.frantech.ca/aff.php?aff=3783) if you want them to kick me a few dollars for your
purchase. [Hetzner](https://www.hetzner.com/cloud) (referral: `ckGrk4J45WdN`) or [netcup](https://www.netcup.eu/) (referral:
`36nc15758387844`) would also be very good options; I've used them all and
am very comfortable recommending them.

### Creating a Certificate Authority {#creating-a-certificate-authority}

After picking a device with a static IP address, it needs to be set up
as a lighthouse. This is done by first creating a Certificate Authority
(CA) that will be used for signing keys and certificates that allow our
other devices into the network. The `.key` file produced by the following
command is incredibly sensitive; with it, anyone can authorise a new
device and give it access to your network. Store it in a safe,
preferably encrypted location.

```bash
nebula-cert ca -name "nebula.example.com"
```
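
This drops `ca.crt` and `ca.key` into the current directory. As one
example of safe storage (assuming GnuPG is installed), you could encrypt
the key symmetrically before archiving it, keeping the plaintext around
only as long as you need it for signing new devices.

```bash
# Produces an encrypted ca.key.gpg you can archive somewhere safe
gpg --symmetric ca.key
```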

I'll explain why we used a Fully-Qualified Domain Name (FQDN) as the
CA's name in a later section. If you have your own domain, feel free to
use that instead; it doesn't really matter what domain is used as long
as the format is valid.
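
If you want to verify what was just created, `nebula-cert` can print a
certificate's details, including its name, IPs, groups, and expiry.

```bash
nebula-cert print -path ca.crt
```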

### Generating lighthouse credentials {#generating-lighthouse-credentials}

Now that we have the CA's `.crt` and `.key` files, we can create and sign
keys and certificates for the lighthouse.

```bash
nebula-cert sign -name "buyvm.lh.nebula.example.com" -ip "192.168.100.1/24"
```

Here, we're using an FQDN for the same reason as we did in the CA. You
can use whatever naming scheme you like; I just prefer
`<vps-host>.lh.nebula...` for my lighthouses. The IP address can be in any
of the following private IP ranges; I just happened to use `192.168.100.X`
for my network.

| IP Range                      | Number of addresses |
| ----------------------------- | ------------------- |
| 10.0.0.0 – 10.255.255.255     | 16 777 216          |
| 172.16.0.0 – 172.31.255.255   | 1 048 576           |
| 192.168.0.0 – 192.168.255.255 | 65 536              |

### Creating a config file {#creating-a-config-file}

The next step is creating our lighthouse's config file. The reference
config can be found in [Nebula's repo.](https://github.com/slackhq/nebula/blob/master/examples/config.yml) We only need to change a few of
the lines for the lighthouse to work properly. If I don't mention a
specific section here, I've left the default values.

The section below is where we'll define certificates and keys. `ca.crt`
will remain `ca.crt` when we copy it over but I like to leave the node's
cert and key files named as they were when generated; this makes it easy
to identify nodes by their configs. Once we copy everything over to the
server, we'll add the proper paths to the `cert` and `key` fields.

```yaml
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/
  key: /etc/nebula/
```

The next section is for identifying and mapping your lighthouses. This
needs to be present in _all_ of the configs on _all_ nodes, otherwise they
won't know how to reach the lighthouses and will never actually join the
network. Make sure you replace `XX.XX.XX.XX` with whatever your VPS's
public IP address is. If you've used a different private network range,
those changes need to be reflected here as well.

```yaml
static_host_map:
  "192.168.100.1": ["XX.XX.XX.XX:4242"]
```

Below, we're specifying how the node should behave: it is a lighthouse,
it should answer DNS requests, the DNS server should listen on all
interfaces on port 53, it sends its IP address to lighthouses every 60
seconds (this option doesn't actually have any effect when `am_lighthouse`
is set to `true`, though), and this lighthouse should not send reports to
other lighthouses. The bit about DNS will be discussed later.

```yaml
lighthouse:
  am_lighthouse: true
  serve_dns: true
  dns:
    host: 0.0.0.0
    port: 53
  interval: 60
  hosts:
```

The next bit is about [hole punching](https://en.wikipedia.org/wiki/Hole_punching_%28networking%29), also called _NAT punching_, _NAT
busting_, and a few other variations. Make sure you read the comments for
better explanations than I'll give here. `punch: true` enables hole
punching. I also like to enable `respond` just in case nodes are on
particularly troublesome networks; because we're using this as a support
system, we have no idea what networks our nodes will actually be
connected to. We want to make sure devices are available no matter where
they are.

```yaml
punchy:
  punch: true
  respond: true
  delay: 1s
```

`cipher` is a big one. The value _must_ be identical on _all_ nodes _and_
lighthouses. `chachapoly` is more compatible so it's used by default. The
devices _I_ want to connect to are all x86 Linux, so I can switch to `aes`
and benefit from [a small performance boost.](https://www.reddit.com/r/networking/comments/iksyuu/comment/g3ra5cv/?utm_source=share&utm_medium=web2x&context=3) Unless you know _for sure_
that you won't need to work with _anything_ else, I recommend leaving it
set to `chachapoly`.

```yaml
cipher: chachapoly
```

The last bit I modify is the firewall section. I leave almost everything
at its defaults but _remove_ the block for `port: 443`. I don't _need_ the
`laptop` and `home` groups (groups will be explained later) to access port
`443` on this node, so I shouldn't include the statement. If you have
different needs, take a look at the comment explaining how the firewall
portion works and make those changes.

Again, I _remove_ the following bit from the config.

```yaml
- port: 443
  proto: tcp
  groups:
    - laptop
    - home
```
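
For reference, what remains of my firewall section after that removal
looks roughly like this, matching the defaults in the reference config:
all outbound traffic is allowed, and inbound is limited to ICMP.

```yaml
firewall:
  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any

  inbound:
    # Allow ICMP between any Nebula hosts
    - port: any
      proto: icmp
      host: any
```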

### Setting the lighthouse up {#setting-the-lighthouse-up}

We've got the config, the certificates, and the keys. Now we're ready to
actually set it up. After SSHing into the server, grab the [latest
release of Nebula for your platform,](https://github.com/slackhq/nebula/releases/latest) unpack it, make the `nebula` binary
executable, then move it to `/usr/local/bin` (or some other location
fitting for your platform).

```bash
wget https://github.com/slackhq/nebula/releases/download/vX.X.X/nebula-PLATFORM-ARCH.tar.gz
tar -xvf nebula-*
chmod +x nebula
mv nebula /usr/local/bin/
rm nebula-*
```
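
To make sure the binary actually runs on your platform, you can ask it
for its version; it should print the release you just downloaded.

```bash
nebula -version
```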

Now we need a place to store our config file, keys, and certificates.

```bash
mkdir /etc/nebula/
```

The next step is copying the config, keys, and certificates to the
server. I use `rsync` but you can use whatever you're comfortable with.
The following four files need to be uploaded to the server.

- `config.yml`
- `ca.crt`
- `buyvm.lh.nebula.example.com.crt`
- `buyvm.lh.nebula.example.com.key`

With `rsync`, that would look something like this. Make sure `rsync` is also
installed on the VPS before attempting to run the commands though;
you'll get an error otherwise.

```bash
rsync -avmzz ca.crt user@example.com:
rsync -avmzz config.yml user@example.com:
rsync -avmzz buyvm.lh.* user@example.com:
```

SSH back into the server and move everything to `/etc/nebula/`.

```bash
mv ca.crt /etc/nebula/
mv config.yml /etc/nebula/
mv buyvm.lh* /etc/nebula/
```

Edit the config file and ensure the `pki:` section looks something like
this, modified to match your hostnames, of course.

```yaml
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/buyvm.lh.nebula.example.com.crt
  key: /etc/nebula/buyvm.lh.nebula.example.com.key
```

Run the following command to make sure everything works properly.

```bash
nebula -config /etc/nebula/config.yml
```

The last step is daemonizing Nebula so it runs every time the server
boots. If you're on a machine using systemd, dropping the following
snippet into `/etc/systemd/system/nebula.service` should be sufficient. If
you're using something else, check [the examples directory](https://github.com/slackhq/nebula/tree/master/examples/) for more
options.

```text
[Unit]
Description=nebula
Wants=basic.target
After=basic.target network.target
Before=sshd.service

[Service]
SyslogIdentifier=nebula
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/nebula -config /etc/nebula/config.yml
Restart=always

[Install]
WantedBy=multi-user.target
```
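
After saving the unit, reload systemd so it picks the new service up,
then enable and start it.

```bash
systemctl daemon-reload
systemctl enable --now nebula.service
```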

We're almost done!

### Setting individual nodes up {#setting-individual-nodes-up}

This process is almost exactly the same as setting lighthouses up. All
you'll need to do is generate a couple of certs and keys then tweak the
configs a bit.

The following command creates a new cert/key for USER's node with the IP
address `192.168.100.2`. The resulting files would go on the _remote_ node,
not yours. Replace `HOST` and `USER` with fitting values.

```bash
nebula-cert sign -name "HOST.USER.nebula.example.com" -ip "192.168.100.2/24"
```

The following command will create a _similar_ cert/key, but it will be part
of the `support` group. The files resulting from this should go on _your_
nodes. With the config we'll create next, nodes in the `support` group
will be able to VNC and SSH into other nodes. Your nodes need to be in
the `support` group so you'll have access to the others. Note that every
node needs its own unique Nebula IP; `.2` is taken, so this one gets `.3`.

```bash
nebula-cert sign -name "HOST.USER.nebula.example.com" -ip "192.168.100.3/24" -groups "support"
```

On to the config now. This tells the node that it is _not_ a lighthouse,
it should _not_ resolve DNS requests, it _should_ ping the lighthouses and
tell them its IP address every 60 seconds, and the node at `192.168.100.1`
is one of the lighthouses it should report to and query from. If you
have more than one lighthouse, add them to the list as well.

```yaml
lighthouse:
  am_lighthouse: false
  #serve_dns: false
  #dns:
  #host: 0.0.0.0
  #port: 53
  interval: 60
  hosts:
    - "192.168.100.1"
```

The other bit that should be modified is the `firewall:` section, and this
is where the groups we created earlier become important. Review its
comments and make sure you understand how it works before proceeding.

We want to allow inbound connections on port 5900, the standard port for
VNC, and port 22, the standard for SSH. Additionally, we _only_ want to
allow connections from nodes in the `support` group. Any _other_ nodes
should be denied access.

Note that including this section is not necessary on _your_ nodes, those
in the `support` group. It's only necessary on the remote nodes that
you'll be connecting to. As long as the `outbound:` section in the config
on _your_ node allows any outbound connection, you'll be able to access
other nodes.

```yaml
- port: 5900
  proto: tcp
  groups:
    - support

- port: 22
  proto: tcp
  groups:
    - support
```

The certs, key, config, binary, and systemd service should all be copied
to the same places on all of these nodes as on the lighthouse.
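
Nebula can also validate a config file without actually bringing the
interface up, which is a handy sanity check right after copying
everything over; a non-zero exit code indicates a faulty config.

```bash
nebula -test -config /etc/nebula/config.yml
```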

## X11vnc {#x11vnc}

_Alright._ The hardest part is finished. Now on to setting `x11vnc` up on
the nodes you'll be supporting.

All you should need to do is install `x11vnc` using the package manager
your distro ships with, generate a 20-character password with `pwgen -s
20 1`, run the following command, paste the password, wait for `x11vnc` to
start up, make sure it's running correctly, press `Ctrl` + `C`, then add the
command to the DE's startup applications!

```bash
x11vnc -loop -usepw -listen <nebula-ip> -display :0
```

`-loop` tells `x11vnc` to restart once you disconnect from the session.
`-usepw` is pretty self-explanatory. `-listen <nebula-ip>` is important; it
tells `x11vnc` to only listen on the node's Nebula IP address. This
prevents randos in a coffee shop from seeing an open VNC port and trying
to brute-force the credentials. `-display :0` just defines which X11
server display to connect to.

Some distributions like elementary OS and those that use KDE and GNOME
will surface a dialogue for managing startup applications if you just
press the Windows (Super) key and type `startup`. If that doesn't work,
you'll have to root around in the settings menus, consult the
distribution's documentation, or ask someone else that might know.
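
If the desktop follows the XDG autostart spec (most do), you can also
create the entry by hand. A minimal sketch, saved as
`~/.config/autostart/x11vnc.desktop` with the node's Nebula IP
substituted in:

```text
[Desktop Entry]
Type=Application
Name=x11vnc
# Same command as above; adjust the IP for each machine
Exec=x11vnc -loop -usepw -listen <nebula-ip> -display :0
```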

After adding it to the startup applications, log out and back in to make
sure it's running in the background.

## Remmina {#remmina}

Now that our network is functioning properly and the VNC server is set
up, we need something that connects to the VNC server over the fancy
mesh network. Enter [Remmina.](https://remmina.org/) This one goes on _your_ nodes.

Remmina is a multi-protocol remote access tool available in pretty much
every distribution's package archive as `remmina`. Install it, launch it,
and add a new connection profile in the top left. Give the profile a
friendly name (I like to use the name of the person I'll be supporting),
assign it to a group, such as `Family` or `Friends`, set the Protocol to
`Remmina VNC Plugin`, enter the node's Nebula IP address in the Server
field, then enter their username and the 20-character password you
generated earlier. I recommend setting the quality to Poor, but Nebula
is generally performant enough that any of the options are suitable; I
just don't want to have to disconnect and reconnect with a lower quality
if the other person happens to be on a slow network.

Save and test the connection!

If all goes well and you see the other device's desktop, you're done
with the VNC section! Now on to SSH.

## SSH {#ssh}

First off, make sure `openssh-server` is installed on the remote node;
`openssh-client` would also be good to have, but from what I can tell,
it's not strictly necessary. You _will_ need `openssh-client` on _your_ node,
however. If you already have an SSH key, copy it over to
`~/.ssh/authorized_keys` on the remote node. If you don't, generate one
with `ssh-keygen -t ed25519`. This will create an Ed25519 SSH key pair.
Ed25519 keys are shorter and faster than RSA and more secure than ECDSA
or DSA. If that means nothing to you, don't worry about it. Just note
that this key might not interact well with older SSH servers; you'll
know if you need to stick with the default RSA. Otherwise, Ed25519 is
the better option. After key generation has finished, copy
`~/.ssh/id_ed25519.pub` (note the `.pub` extension) from your node to
`~/.ssh/authorized_keys` on the remote node. The file _without_ `.pub` is your
_private_ key. Like the Nebula CA key we generated earlier, this
is extremely sensitive and should never be shared with anyone else.
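
If password authentication is still enabled on the remote node at this
point, `ssh-copy-id` can handle the copy for you; otherwise, append the
contents of the `.pub` file to the remote `~/.ssh/authorized_keys`
manually.

```bash
ssh-copy-id -i ~/.ssh/id_ed25519.pub USER@<nebula-ip>
```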

Next is configuring SSH to only listen on Nebula's interface; as with
`x11vnc`, this prevents randos in a coffee shop from seeing an open SSH
port and trying to brute-force their way in. Set the `ListenAddress`
option in `/etc/ssh/sshd_config` to the remote node's Nebula IP address.
If you want to take security a step further, search for
`PasswordAuthentication` and set it to `no`. This means your SSH key is
_required_ for gaining access via SSH. If you mess up Nebula's firewall
rules and accidentally give other Nebula devices access to this machine,
they still won't be able to get in unless they have your SSH key. I
_personally_ recommend disabling password authentication, but it's not
absolutely necessary. After making these changes, run `systemctl restart
sshd` to apply them.
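
Using the example addresses from earlier, the relevant lines in
`/etc/ssh/sshd_config` on the remote node would look something like this.

```text
ListenAddress 192.168.100.2
PasswordAuthentication no
```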

Now that the SSH server is listening on Nebula's interface, it will
actually fail to start when the machine (re)boots. The SSH server starts
faster than Nebula does, so it will look for the interface before Nebula
has even had a chance to connect. We need to make sure systemd waits for
Nebula to start up and connect before it tells SSH to start; run
`systemctl edit --full sshd` and add the following line in the `[Unit]`
section, above `[Service]`.

```text
After=nebula.service
```

Even now, there's still a bit of a hiccup. Systemd won't start SSH until
Nebula is up and running, which is good. Unfortunately, even after
Nebula has started, it still takes a minute to bring the interface up,
causing SSH to crash. To fix _this_, add the following line directly below
`[Service]`.

```text
ExecStartPre=/usr/bin/sleep 30
```

If the `sleep` executable is stored in a different location, make sure you
use that path instead. You can check by running `which sleep`.
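
Put together, the two additions sit in the unit like this; everything
else in your distribution's `sshd` service file stays untouched.

```text
[Unit]
After=nebula.service

[Service]
ExecStartPre=/usr/bin/sleep 30
```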

When the SSH _service_ starts up, it will now wait an additional 30
seconds before actually starting the SSH _daemon_. It's a bit of a hacky
solution but it works™. If you come up with something better, please
send it to me and I'll include it in the post! My contact information is
at the bottom of [this site's home page.](/)

After you've made these changes, run `systemctl daemon-reload` to make
sure systemd picks up on the modified service file, then run `systemctl
restart sshd`. You should be able to connect to the remote node from your
node using the following command.

```bash
ssh USER@<nebula-ip>
```

If you want to make the command a little simpler so you don't have to
remember the IP every time, create `~/.ssh/config` on your node and add
these lines to it.

```text
Host USER
  Hostname <nebula-ip>
  User USER
```

Now you can just run `ssh USER` to get in. If you duplicate the above
block for all of the remote nodes you need to support, you'll only have
to remember the person's username to SSH into their machine.

## Going further with Nebula {#going-further-with-nebula}

This section explains why we used FQDNs in the certs and why the DNS
resolver is enabled on the lighthouse.

Nebula ships with a built-in resolver meant specifically for mapping
Nebula node hostnames to their Nebula IP addresses. Running a public DNS
resolver is very much discouraged because it can be abused in terrible
ways. However, the Nebula resolver mitigates this risk because it _only_
answers queries for Nebula nodes. It doesn't forward requests to any
other servers nor does it attempt to resolve any domain other than what
was defined in its certificate. If you use the example I gave above,
that would be `nebula.example.com`; the lighthouse will attempt to resolve
any subdomain of `nebula.example.com` but it will just ignore `example.com`,
`nebula.duckduckgo.com`, `live.secluded.site`, etc.
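
You can test this from any node by pointing `dig` (or `nslookup`) at the
lighthouse's Nebula IP; with the example network from earlier, a query
like this should return the node's Nebula address.

```bash
dig +short @192.168.100.1 host.user.nebula.example.com
```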

Taking advantage of this resolver requires setting it as your secondary
resolver on any device you want to be able to resolve hostnames from.
If you were to add the lighthouse's IP address as your secondary
resolver on your PC, you could enter `host.user.nebula.example.com` in
Remmina's server settings _instead of_ `192.168.100.2`.

But how you do so is beyond the scope of this post!

If you're up for some _more_ shenanigans later on down the line, you could
set up a Pi-Hole instance backed by Unbound and configure Nebula as
Unbound's secondary resolver. With this setup, you'd get DNS-level ad
blocking _and_ the ability to resolve Nebula hostnames. Pi-Hole would query
Unbound for `host.user.nebula.example.com`, Unbound would receive no
answer from the root servers because the domain doesn't exist outside of
your VPN, Unbound would fall back to Nebula, Nebula would give it an
answer, Unbound would cache the answer, tell Pi-Hole, Pi-Hole would
cache the answer, tell your device, then your device would cache the
answer, and you can now resolve any Nebula host!

Exactly how you do _that_ is **_definitely_** beyond the scope of this post :P

If you set any of this up, I would be interested to hear how it goes! As
stated earlier, my contact information is at the bottom of the site's
home page :)
552stated earlier, my contact information is at the bottom of the site's
553home page :)