
---
title: Supercharged SSH applications on sish
pubDate: 2024-09-23
isAgeRestricted: false
authors: bad-manners
thumbnail: /src/assets/thumbnails/blog/supercharged_ssh_apps_on_sish.png
description: |
  After discovering the joys of a reverse port forwarding-based architecture, I dig even deeper into SSH itself to make the most of it through code.
prev: ssh-all-the-way-down
posts:
  mastodon: https://meow.social/@BadManners/113188858859021367
tags:
  - technical post
  - programming
  - game development
---
import { Image } from "astro:assets";
import TocMdx from "@components/TocMdx.astro";
import imageSishPublic from "@assets/images/ssh/sish_public.png";
import imageVpsArchitectureSish from "@assets/images/ssh/vps_architecture_sish.png";
import imageOverviewSshTunnel from "@assets/images/ssh/overview_ssh_tunnel.png";
import imageRusshAxum from "@assets/images/ssh/russh_axum.png";
import imageCheckboxes from "@assets/images/ssh/checkboxes.png";
import imageMultipaintByNumbers from "@assets/images/ssh/multipaint_by_numbers.png";
This is my second post investigating SSH, and learning what it has to offer. And this time, with some actual code.
<TocMdx headings={getHeadings()} />
## A quick refreSSHer
In my [last post](/blog/ssh-all-the-way-down), I went over a saga of trying to self-host some of my services, before eventually landing on [sish](https://github.com/antoniomika/sish/), an SSH-based tunneling solution. The basic idea is that we can use [reverse port forwarding](https://goteleport.com/blog/ssh-tunneling-explained/) to expose a local socket to the Internet.
<figure>
<Image
src={imageSishPublic}
alt="Diagram entitled 'sish public', showing that Eric's machine with a service exposed on localhost port 3000 connects to sish via the command (ssh -R eric:80:localhost:3000 tuns.sh). This creates a bidirectional tunnel and exposes https://eric.tuns.sh to the Internet, which Tony accesses from a separate device."
/>
<figcaption>
With a simple reverse port forwarding command, and a configured sish instance, we can expose anything to the
Internet. © Antonio Mika
</figcaption>
</figure>
In fact, we can host anything TCP-based as long as we can create a secure shell to sish, even if it's running on the same host machine.
<figure>
<Image
src={imageVpsArchitectureSish}
alt="Diagram showing a VPS host with SSH, HTTP(S), and TCP exposed to the outside world by sish, as it is internally connected through by git.badmanners.xyz via SSH. A Raspberry Pi serving booru.badmanners.xyz connects via SSH as well, while another computer sends an HTTP request for any service. All services connect through SSH and sish handles any reverse proxying within itself."
/>
<figcaption>
The architecture I ended up with. Services inside or outside the VPS both leverage SSH in order to expose
themselves.
</figcaption>
</figure>
In a way, this is a two-fold solution. sish provides us with:
1. A [reverse proxy](https://en.wikipedia.org/wiki/Reverse_proxy), which will handle and route any incoming traffic to the correct applications.
2. A [hole-punching technique](<https://en.wikipedia.org/wiki/Hole_punching_(networking)>), letting us overcome any limitations that NAT imposes.
But as of now, all of our applications look something like this (in [Docker Compose](https://docs.docker.com/compose/) configuration):
<figure>
{/* prettier-ignore-start */}
```yaml
networks:
  nginx:
    external: false

services:
  server:
    image: nginx:1.27.1-alpine3.20
    container_name: nginx
    networks:
      - nginx

  autosish:
    image: docker.io/badmanners/autosish:0.1.0
    container_name: nginx_autosish
    volumes:
      - ./ssh_secret:/secret
    networks:
      - nginx
    command: |
      -i /secret/id_ed25519
      -o StrictHostKeyChecking=accept-new
      -R test.badmanners.xyz:80:server:8080
      sish.top
```
{/* prettier-ignore-end */}
<figcaption>A basic server connecting to sish via a persistent reverse SSH tunnel.</figcaption>
</figure>
We have two images running for our application. One (`server`) is the actual web server, while the other (`autosish`) handles the reverse port forwarding for us.
It makes sense to have this separation into two images if our application only uses an HTTP socket, and if it isn't aware of the SSH tunneling shenanigans going on... which is the case for most applications.
But if we build our _own_ application, we could interact directly with SSH instead. In that case, how deep can we really integrate it with the existing architecture...?
In this post, we will work on a tunnel-aware application, and find out more about the SSH protocol along the way. Be forewarned that there will be plenty of [Rust](https://www.rust-lang.org/) code ahead!
## Reversing expectations
But first, how does remote port forwarding through an SSH tunnel work?
Better yet, how does _an SSH tunnel_ even work?
In the previous post, I explained that SSH is an [application-layer protocol](https://en.wikipedia.org/wiki/Application_layer) that runs on top of TCP. Our SSH server is listening on a port (usually 22) and we are able to connect to it.
SSH has its own protocols and expected behaviors, but as long as a library implements them properly, we should be able to connect without the default SSH client.
<figure>
<Image
src={imageOverviewSshTunnel}
alt="Diagram showing a client connecting to an SSH server through bidirectional SSH. The connection is on the server's port 22. Over the SSH connection there is an encrypted session labeled 'tunnel'."
/>
<figcaption>A simplified overview of SSH.</figcaption>
</figure>
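We can even verify that first claim with nothing but the standard library: every SSH connection begins with a plain-text identification string (e.g. `SSH-2.0-OpenSSH_9.6`) sent over the raw TCP socket, before any encryption takes place. Here's a quick sketch of my own (the helper function and argument handling are assumptions for illustration, not part of the final project):

```rust
use std::io::{BufRead, BufReader};
use std::net::TcpStream;

/// Returns whether a line looks like an SSH identification string,
/// e.g. "SSH-2.0-OpenSSH_9.6". (Hypothetical helper for this example.)
fn is_ssh_banner(line: &str) -> bool {
    line.starts_with("SSH-2.0-") || line.starts_with("SSH-1.99-")
}

fn main() -> std::io::Result<()> {
    if let Some(host) = std::env::args().nth(1) {
        // Plain TCP connection to an SSH server; no SSH library involved.
        let stream = TcpStream::connect(&host)?;
        let mut reader = BufReader::new(stream);
        let mut banner = String::new();
        reader.read_line(&mut banner)?;
        println!("Server identified itself as: {}", banner.trim_end());
        assert!(is_ssh_banner(banner.trim_end()));
    } else {
        // With no host given, just demonstrate the check on a sample banner.
        assert!(is_ssh_banner("SSH-2.0-OpenSSH_9.6"));
        println!("Pass a host:port argument (e.g. tuns.sh:22) to probe a real server.");
    }
    Ok(())
}
```

Pointing this at any public SSH server prints its version banner, the same one that the regular client reports during its handshake.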
So, let's try connecting to the server ourselves, without the `ssh -R` magic to guide us!
For that, I'll be using a comprehensive Rust library called [russh](https://docs.rs/russh/latest/russh/). It's an [async](https://en.wikipedia.org/wiki/Async/await) library that relies on [Tokio](https://tokio.rs/), so we will also bring in an appropriate async HTTP library later on.
Right, let's get to it!
As a starting point, we need to connect our client to the server. This is simple enough:
```rust
use std::path::PathBuf;
use std::sync::Arc;

use anyhow::{Context, Result};
use clap::Parser;
use russh::client;

#[derive(Parser, Debug)]
#[command(version, about, long_about = None)]
struct ClapArgs {
    // ... removed for clarity ...
}

#[tokio::main]
async fn main() -> Result<()> {
    let args = ClapArgs::parse();
    let config = Arc::new(client::Config {
        ..Default::default()
    });
    let client = Client {};
    let session = client::connect(config, (args.host, args.port), client)
        .await
        .with_context(|| "Unable to connect to remote host.")?;
    println!("Connected.");
    Ok(())
}
```
If you're unfamiliar with Rust, this might be a lot at once. In summary, we're doing two things: at the top, we import any dependencies we need; and at the bottom, inside `async fn main()`, we're setting up a client connection with `client::connect()`.
Aside from this code, we also need to define the `Client` struct, which will be responsible for answering messages created by the server. It will implement the `russh::client::Handler` async trait, which exposes our user-defined methods as the hooks that the library knows to call.
```rust
use async_trait::async_trait;
use russh::keys::key;

struct Client;

#[async_trait]
impl client::Handler for Client {
    type Error = anyhow::Error;

    #[allow(unused_variables)]
    async fn check_server_key(
        &mut self,
        server_public_key: &key::PublicKey,
    ) -> Result<bool, Self::Error> {
        // Accept any key fingerprint from the server.
        Ok(true)
    }
}
```
With these two pieces, we can try running our program with cargo like this:
```sh
cargo run -- -R test -p 80 -i ~/.ssh/id_ed25519 sish.top
```
We're using [clap](https://docs.rs/clap/latest/clap/), an argument parser, to read the flags that are passed. The way it's set up, you can read this as being equivalent to the other command that we've seen so far:
```sh
ssh -R test:80:localhost:???? -i ~/.ssh/id_ed25519 sish.top
```
(Notice that we no longer specify the local port to forward remote connections to; we'll get to that later.)
When we run this, it prints `Connected.` on our terminal, then immediately exits the program.
At least we're doing...not nothing.
You might've realized that the connection is being ignored right after we create it. The `client::connect()` function simply establishes the TCP socket and checks for the server's key fingerprint, through the single method that we've implemented on the `Client` struct.
As you may have guessed from the identity file being passed to the command, we'll have to use that to authenticate now. So after we create a connection, we'll immediately call `session.authenticate_publickey()` to do so via public key cryptography. I've cut the repeated portion of the program below with a `snip`:
```rust
use anyhow::anyhow;
use russh::keys::{decode_secret_key, key};
use tokio::fs;

#[tokio::main]
async fn main() -> Result<()> {
    // ... snip ...
    let secret_key = fs::read_to_string(args.identity_file)
        .await
        .with_context(|| "Failed to open identity file")?;
    let secret_key = decode_secret_key(&secret_key, None)
        .with_context(|| "Invalid secret key")?;
    if session
        .authenticate_publickey(args.login_name, Arc::new(secret_key))
        .await
        .with_context(|| "Error while authenticating with public key.")?
    {
        println!("Public key authentication succeeded!");
    } else {
        return Err(anyhow!("Public key authentication failed."));
    }
    Ok(())
}
```
If your key is authorized with the given server, this will print `Public key authentication succeeded!` after connecting, then immediately exit again. Not a lot of progress, but bear with me.
So connecting and authenticating is straightforward enough. You might draw a parallel with first connecting to a website, then logging in with your credentials. What comes after you log in is a bit more free-form, and depends on what you intend to do on the website.
With SSH, the analogy still holds. After creating a session, there are many options for what we can do next ([all of them available in russh](https://docs.rs/russh/0.45.0/russh/client/struct.Session.html)):
- `request_pty()`: Request that the server create an interactive [pseudo-terminal](https://en.wikipedia.org/wiki/Pseudoterminal), allowing us to enter commands over a remote shell session. This is the most common use case for SSH.
- `request_x11()`: Request that an [X11 connection](https://en.wikipedia.org/wiki/X_Window_System) be forwarded over the network. This lets us access graphical applications through SSH!
- `tcpip_forward()`: Request TCP/IP forwarding from the server.
Wait, that last one sounds exactly like what we've been looking for! So let's try it out...
```rust
#[tokio::main]
async fn main() -> Result<()> {
    // ... snip ...
    session
        .tcpip_forward(args.remote_host, args.remote_port.into())
        .await
        .with_context(|| "tcpip_forward error.")?;
    println!("tcpip_forward requested.");
    Ok(())
}
```
Once again, it succeeds and binds on the provided host and port, then exits immediately. I'm seeing a pattern here...
It turns out that there's one missing piece here, and that is to create an open-session channel. We'll get into the reason why in the next subsection, but for now, let's push on with some more code.
```rust
use russh::{ChannelMsg, Disconnect};
use tokio::io::{stderr, stdout, AsyncWriteExt};

#[tokio::main]
async fn main() -> Result<()> {
    // ... snip ...
    let mut channel = session
        .channel_open_session()
        .await
        .with_context(|| "channel_open_session error.")?;
    println!("Created open session channel.");
    let mut stdout = stdout();
    let mut stderr = stderr();
    let code = loop {
        let Some(msg) = channel.wait().await else {
            return Err(anyhow!("Unexpected end of channel."));
        };
        match msg {
            ChannelMsg::Data { ref data } => {
                stdout.write_all(data).await?;
                stdout.flush().await?;
            }
            ChannelMsg::ExtendedData { ref data, ext: 1 } => {
                stderr.write_all(data).await?;
                stderr.flush().await?;
            }
            ChannelMsg::Success => (),
            ChannelMsg::Close => break 0,
            ChannelMsg::ExitStatus { exit_status } => {
                channel
                    .eof()
                    .await
                    .with_context(|| "Unable to close connection.")?;
                break exit_status;
            }
            msg => return Err(anyhow!("Unknown message type {:?}.", msg)),
        }
    };
    println!("Connection closed with code {}.", code);
    session
        .disconnect(Disconnect::ByApplication, "", "English")
        .await
        .with_context(|| "Failed to disconnect")?;
    Ok(())
}
```
There's a lot going on in this one. First we create a channel with `session.channel_open_session()`, then we set up a `loop` that will read every message that we eventually get through this channel (by reading it with `channel.wait().await`). We have to handle each message type appropriately. Then, once the channel closes, we try to close the session cleanly if possible.
When we run it this time, we expect it to print a message and then exit right away, like before. Instead, we see:
```
Press Ctrl-C to close the session.
Starting SSH Forwarding service for http:80. Forwarded connections can be accessed via the following methods:
HTTP: http://test.sish.top
HTTPS: https://test.sish.top
```
Wait, the session is actually persisting?! And we even got some data from sish in the process!
When we created the channel through `session.channel_open_session()`, the server automatically knew that it could send us information through it. It is just a two-way tunnel where every message is secure, so we can read data, and even send some back if we want (for example, with [`stdin`](https://en.wikipedia.org/wiki/Standard_streams) if we're making a terminal application).
That's cool and all, but more important is how it gave us a URL for our service with automatic HTTPS, even! I wonder what happens if I try to open that link in my browser...
...
...And it just times out after a while with "bad gateway", causing our SSH program to exit.
Well, that's less exciting. But at least it's doing _something_. Besides, if we never touch the link that it passed us, we can stay connected indefinitely. And as soon as we open the link on any device, we consistently get disconnected from the SSH server. That's proof that there's a relation between what the browser sees and our little Rust program.
The reason why we get disconnected is that we aren't handling any requests that come in. Right now, there's no way to read HTTP requests, let alone send HTTP responses.
But wasn't `channel_open_session()` already doing that? Not really; it only serves to transfer messages between the client and the server. Instead, to handle each new connection, we need to use a new channel.
Sounds simple enough. Then how do we create these channels? The answer is also simple: We don't.
### Changing channels
At this point, it's worth taking a detour from all the code and explaining how an SSH session actually works.
[RFC 4254](https://datatracker.ietf.org/doc/html/rfc4254) is the document that dictates how SSH connections are supposed to work at a higher level. There are some interesting details, but the most important part for us is the ["5. Channel Mechanism"](https://datatracker.ietf.org/doc/html/rfc4254#section-5) section:
> All terminal sessions, forwarded connections, etc., are channels. Either side may open a channel. Multiple channels are multiplexed into a single connection.
In other words, a connection can have multiple channels, each responsible for a part of the system. This explicitly includes forwarded connections, such as the ones we are looking for.
Even more so, either side can open channels. Earlier, we opened one with `session.channel_open_session()` from the client-side, but the server is also allowed to open them if necessary.
Specifically, we see that in the ["7.1. Request Port Forwarding"](https://datatracker.ietf.org/doc/html/rfc4254#section-7.1) section, the mechanism for requesting a port is specified. It's exactly what we're doing right now, using `session.tcpip_forward()` and what not.
In the next section, ["7.2. TCP/IP Forwarding Channels"](https://datatracker.ietf.org/doc/html/rfc4254#section-7.2), the expected behavior of the server is explained:
> When a connection comes to a port for which remote forwarding has been requested, a channel is opened to forward the port to the other side.
So a new channel is being created and opened _for_ us. The channel is labeled `forwarded-tcpip`, which corresponds to the `server_channel_open_forwarded_tcpip()` method of `russh::client::Handler`.
Previously, that `Handler` only had one method defined by our `Client` struct, which accepted any key fingerprint that the server provides. So we gotta add a second one to handle any received forwarding channels.
Remember, forwarded connections are channels, so we can use them just like the channel we get from `session.channel_open_session()`. And thankfully, as you'll see, Tokio has some utilities to make their usage trivial for our case.
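To make the wire format less abstract, here's how the server's channel-open message for one of these forwarded connections is laid out. The sketch below is my own code, not something russh exposes; it follows sections 5.1 and 7.2 of RFC 4254, where SSH `string`s are length-prefixed byte sequences and integers are big-endian `uint32`s:

```rust
/// Message number for SSH_MSG_CHANNEL_OPEN, per RFC 4254 section 5.1.
const SSH_MSG_CHANNEL_OPEN: u8 = 90;

/// Appends an SSH `string` (uint32 big-endian length, then the bytes).
fn put_string(buf: &mut Vec<u8>, s: &str) {
    buf.extend_from_slice(&(s.len() as u32).to_be_bytes());
    buf.extend_from_slice(s.as_bytes());
}

/// Appends an SSH `uint32` (big-endian).
fn put_u32(buf: &mut Vec<u8>, n: u32) {
    buf.extend_from_slice(&n.to_be_bytes());
}

/// Builds the CHANNEL_OPEN payload that the server sends to open a
/// "forwarded-tcpip" channel toward us (RFC 4254, section 7.2).
fn forwarded_tcpip_open(
    sender_channel: u32,
    window_size: u32,
    max_packet: u32,
    connected_addr: &str,
    connected_port: u32,
    originator_addr: &str,
    originator_port: u32,
) -> Vec<u8> {
    let mut buf = vec![SSH_MSG_CHANNEL_OPEN];
    put_string(&mut buf, "forwarded-tcpip");
    put_u32(&mut buf, sender_channel);
    put_u32(&mut buf, window_size);
    put_u32(&mut buf, max_packet);
    put_string(&mut buf, connected_addr);
    put_u32(&mut buf, connected_port);
    put_string(&mut buf, originator_addr);
    put_u32(&mut buf, originator_port);
    buf
}

fn main() {
    // Example values only; the server fills these in for a real connection.
    let msg = forwarded_tcpip_open(0, 2 * 1024 * 1024, 32768, "test.sish.top", 80, "203.0.113.7", 51234);
    // The payload starts with the message number, then the channel type.
    assert_eq!(msg[0], SSH_MSG_CHANNEL_OPEN);
    assert_eq!(msg[5..20], *b"forwarded-tcpip");
    println!("Encoded {} bytes.", msg.len());
}
```

russh parses this message for us and hands the result to our handler as a ready-made channel, which is exactly why we never build these bytes ourselves.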
### Back in session
If I understood the documentation correctly, then we should be pretty close to actually getting something to the HTTP endpoint! We just need to create an HTTP server on our side to handle everything for us, then connect it to the data channels that we receive from the server.
First, let's make the simplest HTTP server with global state that we can:
```rust
use std::sync::{
    atomic::{AtomicUsize, Ordering},
    Arc, LazyLock,
};

use axum::{extract::State, routing::get, Router};

#[derive(Clone)]
struct AppState {
    data: Arc<AtomicUsize>,
}

/// A basic example endpoint that includes shared state.
async fn hello(State(state): State<AppState>) -> String {
    let request_id = state.data.fetch_add(1, Ordering::AcqRel);
    format!("Hello, request #{}!", request_id)
}

/// A lazily-created Router, to be used by the SSH client tunnels.
static ROUTER: LazyLock<Router> = LazyLock::new(|| {
    Router::new().route("/", get(hello)).with_state(AppState {
        data: Arc::new(AtomicUsize::new(1)),
    })
});
```
If you're unfamiliar with Rust's [synchronization primitives](https://doc.rust-lang.org/std/sync/index.html), this may be a bit hard to read. But all it does is lazily create an HTTP router in `ROUTER` that responds to each subsequent request with a global counter (1, 2, 3, and so on).
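One subtlety worth calling out: `fetch_add` returns the value from _before_ the increment, which is why initializing the counter to 1 makes the first visitor "request #1". The same counter logic, boiled down to a standalone sketch:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

fn main() {
    // Same shape as the AppState above: a shared, thread-safe counter.
    let data = Arc::new(AtomicUsize::new(1));

    // Each "request" atomically bumps the counter and gets the old value.
    let first = data.fetch_add(1, Ordering::AcqRel);
    let second = data.fetch_add(1, Ordering::AcqRel);
    assert_eq!(first, 1);
    assert_eq!(second, 2);
    println!("Hello, request #{}!", first);
}
```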
Remember that our goal is to serve this router to any channels received through `server_channel_open_forwarded_tcpip()`. If we were in the C world, we'd need to reference the channel directly by its ID, but in `russh`, a struct representing the channel is already given to us, abstracting that detail away and avoiding any mistakes on the programmer's part.
In order to connect the `ROUTER` and the channel together, we'll turn the provided SSH channel into a [stream](https://tokio.rs/tokio/tutorial/streams), then use some magic with Hyper and Tower to be able to serve HTTP responses as if that stream were a TCP socket:
```rust
use hyper::service::service_fn;
use hyper_util::{
    rt::{TokioExecutor, TokioIo},
    server::conn::auto::Builder,
};
use russh::{
    client::{self, Msg, Session},
    Channel, ChannelMsg,
};
use tower::Service;

#[async_trait]
impl client::Handler for Client {
    // ... snip ...

    #[allow(unused_variables)]
    async fn server_channel_open_forwarded_tcpip(
        &mut self,
        channel: Channel<Msg>,
        connected_address: &str,
        connected_port: u32,
        originator_address: &str,
        originator_port: u32,
        session: &mut Session,
    ) -> Result<(), Self::Error> {
        let router = &*ROUTER;
        let service = service_fn(move |req| router.clone().call(req));
        let server = Builder::new(TokioExecutor::new());
        tokio::spawn(async move {
            server
                .serve_connection_with_upgrades(TokioIo::new(channel.into_stream()), service)
                .await
                .expect("Invalid request");
        });
        Ok(())
    }
}
```
And with only a couple lines of code, we're able to glue our axum router to the SSH channel!
If we now run our program again and open the link another time, we see in our browser:
<figure>
<Image src={imageRusshAxum} alt="A screenshot of a browser with the text 'Hello, request #1!'." />
<figcaption>We get our message back!</figcaption>
</figure>
In fact, that's all we needed to make our application work with sish! If I refresh the page or open it in a different window or device, the global counter will increase and greet the user with a new message.
And, as you may have noticed, we never used a single TCP socket, other than to connect to the SSH server. But unlike with `ssh -R`, we never had to bind one on our side. We can use a tunnel for hole-punching without even having a socket to punch through to!
But then, why does the SSH client require that you specify a numbered port for remote forwarding if it's unnecessary? I imagine that the reason for it is convenience. It's easier to map one socket (the remote one) to another (your local one), and ends up being what you want to do in the majority of cases anyway.
However, you can see that the underlying SSH protocol is not too complicated, at least through the interfaces it exposes. We only had to write fewer than 200 lines of Rust code, and we're already doing things that the regular SSH client can't do alone.
To summarize, this is what the code does:
1. Starts a connection with the SSH server.
2. Authenticates via public key.
3. Requests TCP/IP forwarding for a specific host and port (normally `localhost:80`).
4. Starts an open session to send and receive control data.
5. Upon receiving a newly-created forwarded TCP/IP channel, uses our router to interact with said channel as if it were a TCP socket.
If you want to see the final code, with some additional functionality for running on [localhost.run](https://localhost.run) (by using a PTY session instead of an open session), you can check out the [russh-axum-tcpip-forward](https://github.com/BadMannersXYZ/russh-axum-tcpip-forward) repository on GitHub.
## Painting the bigger picture
But a simple HTTP server isn't that interesting by itself, even though it's running over SSH instead of a socket. Can we do better?
Yes, we can. We get all the nice HTTP features that we'd expect, including support for [WebSocket](https://en.wikipedia.org/wiki/WebSocket), but that's beyond the scope of this post.
What I'm more interested in is pushing the limits of this solution in terms of simple HTTP. And since I was just starting to learn [htmx](https://htmx.org/), it seemed like the perfect opportunity to put it to the test.
At first, I made a simple application that stores data for a bunch of checkboxes, then updates them when you click them, and periodically polls the server to grab modifications done by other users. It was a dumb but easy idea to implement, but I didn't stop there.
<figure>
<Image
src={imageCheckboxes}
alt="A screenshot of a browser with a Web 1.0-looking grid of many checkboxes, some of them checked."
/>
<figcaption>Running a poor man's clone of One Million Checkboxes.</figcaption>
</figure>
Seeing the checkboxes getting filled and emptied in a grid reminded me a lot of [nonograms](https://en.wikipedia.org/wiki/Nonogram). You might know them as "picross" or "paint by numbers", or not know them at all; they are a kind of puzzle where you fill in cells of a grid in order to reveal a pixelated image. So I thought, why not make a multiplayer version? I called it Multipaint by Numbers, and worked on it over the next several days.
<figure>
<Image
src={imageMultipaintByNumbers}
alt="A screenshot of a browser with a Web 1.0-looking picross or nonogram grid, entitled Multipaint by Numbers, and containing instructions on how to play, as well as multiple colorful cursors."
/>
<figcaption>A screenshot of me playing Multipaint by Numbers together with myself.</figcaption>
</figure>
It's a janky mess, sure, but it definitely works! It has click-and-drag, it has flagging, and it even has cursors that (try to) match those of other people currently playing! And I'm the first to admit that it feels buggy (rather than _actually_ being buggy) and unresponsive, since htmx wasn't designed for highly interactive applications like this one. But it was quite a fun learning experience! If you've dabbled in full-stack development, I highly recommend checking out htmx if you haven't; compared to plain JavaScript, it's like a breath of fresh air.
But if you just wanna play it yourself, Multipaint by Numbers is available at https://multipaint.sish.top, and you can find the source code [here](https://github.com/BadMannersXYZ/htmx-ssh-games).
## Awaiting the future
Of course, none of these projects do anything special. It'd be better to just make them as plain HTTP applications, after all. Why go through such a roundabout way to make webapps?
But there was a reason. Both "400 Checkboxes" and "Multipaint by Numbers" were just toy projects to learn the basics about russh, Axum, and HTMX. And I was hoping to go in a new direction with this knowledge.
Recall from my previous post the motivation for looking into SSH reverse port forwarding in the first place: I wanted to expose services from my home network that would otherwise get blocked by NAT.
This ties in with an idea I've had for a game for a while. It's supposed to run on your computer, but be controlled remotely through a phone (or a separate browser window), with interactivity that connects and synchronizes both ends. They could be running on the same network, but people often use their phone on cellular data, which puts it behind a completely different [ASN](<https://en.wikipedia.org/wiki/Autonomous_system_(Internet)>).
Well, what if you could connect your phone to your computer without worrying about NAT? What if it was as simple as accessing a web page? What if the phone interactions were as simple as touching buttons in a mobile-first webapp?
If you've played any of the [Jackbox Party games](https://en.wikipedia.org/wiki/The_Jackbox_Party_Pack), you're already familiar with this kind of architecture (and it was one of the inspirations for this idea!). The only difference is that a single device will connect to the game, instead of multiple players.
On the other hand, if you are familiar with [WebRTC](https://en.wikipedia.org/wiki/WebRTC), you might be thinking that this isn't so different from a [TURN server](https://webrtc.org/getting-started/turn-server), and it's definitely similar! But for my project, I think that an SSH-based solution might work better:
- Setting up a new sish instance for my project is not very complicated, whereas WebRTC makes me want to pull my hair out.
- I was already planning on using HTML for the mobile interface (instead of, say, [a native app](https://aws.amazon.com/compare/the-difference-between-web-apps-native-apps-and-hybrid-apps/)), so a hypermedia-driven library like HTMX may suit my needs better than translating the plain data that WebRTC sends.
- However, it'd still require some JavaScript on the mobile end of it, for things like [rollback netcode](https://en.wikipedia.org/wiki/Netcode#Rollback).
- SSH already comes with built-in authentication and encryption mechanisms, meaning that I wouldn't have to roll my own. (In fact, the people behind sish and tuns.sh leverage this feature of SSH, plus _forward_ TCP connections, to create [tunnel-based logins for services](https://pico.sh/tunnels).)
- The dependency on SSH is transparent, letting me work on the communication channel as if it were a plain web server, or any other application for that matter. There is no lock-in to a specific technology like there is with WebRTC.
- Since I plan on having a web interface on the mobile device anyway, this scheme avoids adding extra logic for a separate web server. The sish proxy essentially only handles upgrading our HTTP connection to HTTPS, and the web server can be embedded in the computer application, similarly to a [thin client](https://en.wikipedia.org/wiki/Thin_client).
With that said, there's nothing inherently wrong with WebRTC (other than it being [a complex mess of protocols](https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API/Protocols)), and I'm not dismissing it straight away for this project.
Our chosen path still has disadvantages too, one of them being that I'd be forced to use [TCP](https://en.wikipedia.org/wiki/Transmission_Control_Protocol) for communication. But since having a web interface on the remote controller was already part of the initial idea, that'd be unavoidable even if I picked WebRTC for the project. Another challenge is the added overhead of the proxy server, but with proper latency-based rollback, this can be mitigated and isn't so different from what would happen with a TURN server, really.
One final bonus that we get over a traditional client-server architecture is keeping the responsibilities where they actually belong. Normally, this kind of game would require a central server that coordinates two or more clients: the computer running the game, and the mobile phone(s) running the controller. With remote port forwarding, the computer **will** be the server, exposed directly through the Internet. The mobile phone will be a regular client of that server, and there's no opaque abstraction over their communication other than HTTP itself.
I've had this idea for a while now, but I was struggling to make it work with a traditional server-side architecture. It turns out that I don't need to implement anything myself. sish is configurable enough that it can serve many purposes, be it hosting multiple services, or managing multiple game connections. And for my project, it's definitely a viable solution that I'll look more into.
But for now, that's all I have to say about it. I hope that this blog post has given you some good insight into the inner workings of SSH, and perhaps even gave you ideas to try out yourself!