[kwlug-disc] deno as next cluster engine?

Mikalai Birukou mb at 3nsoft.com
Sat Nov 7 11:24:05 EST 2020


<Saturday thoughts>


deno ( https://deno.land/ ) runs code in a sandbox: it touches the file 
system, network, or environment only when explicit permissions are given.

At a high level of abstraction, this is what Docker does: it runs 
processes in a sandbox. When a Docker volume is a bind mount to a 
directory, it is essentially the same thing as a particular file-system 
permission in deno.
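
For concreteness, here is a minimal sketch of the parallel; the paths 
and file names below are hypothetical:

    // read_config.ts -- a minimal sketch, paths are made up
    //
    // Docker version of the grant:  docker run -v /srv/data:/data:ro some-image
    // deno version of the grant:    deno run --allow-read=/srv/data read_config.ts
    //
    // without --allow-read=/srv/data, deno denies this read
    const text = await Deno.readTextFile("/srv/data/config.json");
    console.log(text);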

Can deno be the next cluster engine, the way Docker is the engine under 
Swarm or Kubernetes?


There is ongoing development of WebAssembly and WASI. Reportedly, the 
Docker guy said that they would've gone with WASI as a way to contain 
processes, had it existed at the time.

Fastly is using WASI in its "platform for edge computing": 
https://www.fastly.com/blog/announcing-lucet-fastly-native-webassembly-compiler-runtime

deno uses V8, and V8 has a WebAssembly engine -- so, look, there is WASI 
support in deno.
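
As a minimal sketch of the WebAssembly side (the module file and its 
exported add function are made up for illustration):

    // run_wasm.ts -- loads a local wasm module through V8's WebAssembly engine
    // run with:  deno run --allow-read run_wasm.ts
    const bytes = await Deno.readFile("./module.wasm");   // hypothetical module
    const { instance } = await WebAssembly.instantiate(bytes);
    const add = instance.exports.add as (a: number, b: number) => number;
    console.log(add(2, 3));
    // a WASI shim (for example the one in deno's std library) would additionally
    // wire up the system-call surface for such a module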


Thinking about "edge computing": I already have a cloud of personal 
compute devices around. A personal cloud, or junk, if it isn't used.

The rhetorical question here would be: "Is it feasible to run Docker 
Swarm, or Kubernetes, on these devices?"

On the other hand, the practical question is: "Can I run the single deno 
binary, needing no root privileges?"

Imagine a tool that you can drop onto dusty junk, and it becomes a 
cluster. Should that simple tool then be used in the server room as well?


When I write a stack file for a Docker Swarm based application, I 
pedantically define networks that go between specific services only, so 
as to limit exposure. Yet, when I administer the Swarm cluster itself, 
these networks don't exist.
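
The kind of stack file I mean looks roughly like this (service and 
network names are made up):

    version: "3.8"
    services:
      web:
        image: example/web
        networks: [front]
      api:
        image: example/api
        networks: [front, accounts_net]
      accounts:
        image: example/accounts
        networks: [accounts_net]    # only api can talk to accounts
    networks:
      front:
      accounts_net: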

Why then, as a developer, am I thinking about the transport choice for 
messages between services? Is it possible that I choose tcp/ip where a 
direct unix socket is the faster option?

I suggest forgetting about networks inside an application that runs on a 
cluster. Instead, processes in the cluster get application-specific 
privileges like "msg_to_accounts".

As a developer, I may only need to package my protocol buffers and hand 
them to an injected capability function in JS and WASM, or push them 
into a unix socket in a legacy app wrapper (docker?). The only thing I 
may ever need to pay attention to is whether a particular communication 
expects the same instance of a service on the other end, because the 
construction of the app defines this: the developer defines it, 
consciously or not.
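
A sketch of what such an injected capability could look like in 
TypeScript; the names and signature here are hypothetical, not an 
existing API:

    // the cluster injects this; the app never opens a socket itself
    type MsgToAccounts = (payload: Uint8Array) => Promise<Uint8Array>;

    // application code hands already-encoded protobuf bytes to the capability;
    // whether the bytes travel over a unix socket, tcp/ip, or stay in-process
    // is the cluster's decision, not the developer's
    async function chargeCustomer(
      msgToAccounts: MsgToAccounts,
      encodedInvoice: Uint8Array,
    ): Promise<Uint8Array> {
      return await msgToAccounts(encodedInvoice);
    }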

The cluster should choose local or remote transports for the byte 
buffers. The cluster chooses instances, doing the load balancing, and at 
that point it has info to dump into a Netdata sensor.

Time distance between instances and their load can be interesting 
factors for routing, and maybe for constraints on the placement of 
processes.
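
To make the routing idea concrete, here is a hypothetical sketch of how 
a cluster runtime might pick an instance and a transport (nothing here 
is an existing API):

    interface Instance {
      id: string;
      local: boolean;      // on the same host as the caller?
      latencyMs: number;   // measured "time distance" to the instance
      load: number;        // current load, 0..1
    }

    // lower cost wins: a local instance costs nothing to reach,
    // a remote one is penalized by its latency; load penalizes both
    function cost(i: Instance): number {
      return (i.local ? 0 : i.latencyMs) + 100 * i.load;
    }

    // instances is assumed non-empty
    function pickInstance(instances: Instance[]): Instance {
      return instances.reduce((best, i) => (cost(i) < cost(best) ? i : best));
    }

    // the chosen instance then determines the transport: unix socket or an
    // in-process call for a local instance, tcp/ip for a remote one, and the
    // decision plus timings can be reported to a metrics sink like Netdata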


What other fundamental considerations are there before hacking this up?

What is your Kubernetes, Docker, Lambda, etc. experience?




