While I've got a big 🧵 going on Twitter for the mirror.fcix.net project @warthog9 and I are working on https://twitter.com/KWF/status/1509276966704218113
I figured Mastodon would be a fitting place to put the thread for the other half of the same project, where John and I are building a fleet of "Micro Mirrors" to help #linux distributions continue to operate for free.
So we're building mirrors and then passing the hat around to fund this silly little project in exchange for entertainment. paypal.me/kennethfinnegan
As part of the MFN (mirror.fcix.net) project, John built an amazing Grafana/influx telemetry system that takes every HTTP request in the logs and parses it out into what project, which release, what ASN, and how many bytes the request was, which is giving us an amazing level of visibility into what's going on with that large software mirror.
This led to the realization that I'm now able to calculate what I'm calling the "CDN efficiency" of every project, which is the number of bytes served per day divided by the number of bytes used on disk.
Look up the day's "bytes served" and divide by the output of "du -h -d 1 /data/mirror"
And we get numbers like the following:
So for example, the 1.97 CDN efficiency for EPEL means that for the 269GB used on our mirror to host that project, we're serving 1.97 times that (530GB) per day.
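The calculation is simple enough to sketch in a few lines of Python (a hypothetical helper, not the actual Grafana/Influx telemetry pipeline):

```python
def cdn_efficiency(bytes_served_per_day: float, bytes_on_disk: float) -> float:
    """CDN efficiency = bytes served per day / bytes used on disk."""
    return bytes_served_per_day / bytes_on_disk

# EPEL example from this thread: ~530GB/day served from 269GB on disk
print(round(cdn_efficiency(530e9, 269e9), 2))  # → 1.97
```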
Which led to the realization that even though we're using almost 40TB on MFN to host projects, something like 60-70% of our actual served bandwidth is coming from only the most popular 3TB worth of files.
So while these big heavy-iron mirrors are needed and valuable for a foundational capacity and the long tail of projects, what if we added more mirroring capacity and distribution by building additional mirrors which are easier to host, and scattered them around in more places on the Internet to combat the consolidation that seems to be pervasively happening across all strata of the Internet?
Enter the concept of the Micro Mirror. https://github.com/PhirePhly/micromirrors
What if we built a tiny #linux Micro Mirror that only had 2TB of storage in it and only served a few of the smallest and most popular projects?
Not as a replacement for the monolithic heavy iron mirrors, but as a way to take some of the hottest load off of them so the heavy iron mirrors can spend more of their time doing what they're good at, which is serving the long tail of the less popular content.
So for our initial proof of concept, we're using the HP T620 thin client as our base, since these are cheap and plentiful on eBay, low power, fanless, and I literally stock a pile of them in my apartment for projects where I need servers with N→0 horsepower.
Two DDR3 SODIMM slots, an M.2 SATA slot, an mPCIe slot, a 1GbaseT NIC, and it uses <15W of power from a 19V soap-on-a-rope power supply.
2x4GB RAM sticks and a 2TB WD Blue SSD, and we're able to put together the whole box for <$250
So for less than the cost of a single one of the 16TB hard drives used in MFN, we're able to build a Micro Mirror.
The big question that we're working to answer now is whether this is a more or less effective use of resources to support Linux software distribution.
Build another monolith vs build 10 more micro mirrors to shed load off the existing monoliths.
Initial benchmarks on the T620 hardware look very good. It can read and write to the SSD at 3-4Gbps, so the amount of RAM in the Micro Mirror beyond "enough to operate the control plane" doesn't matter anywhere near as much as on a spinning rust mirror, because shoveling content from SSD to 1Gbps Ethernet is effectively free.
The 4 core GX-415 CPU in these thin clients is able to handle serving line rate small HTTPS requests out the NIC without even maxing a single core.
Which I think is both a testament to how freaking powerful 15W TDP CPUs have gotten and how battle hardened and optimized software stacks like Nginx are for performance.
Not usable as an end user desktop running a web browser, but able to handle updates for a million servers per day. 🙄
So we can now build the hardware and manage the instance, and for each server we just need to find an ISP with 1Gbps of surplus egress transit bandwidth and the lack of manpower/interest in putting in the effort to manage their own server.
They give us an IP address, space, power, and then can just forget about this thing because it's smol and in the absolute worst case is only pumping out 1Gbps of what can be treated as entirely scavenger class traffic below their customer data.
So our initial alpha deployment is sitting two racks down from my AS7034 Phirephly Design network in Fremont and hosting six projects on a 2TB SSD. https://codingflyboy.mm.fcix.net/
We're still working on the cosmetics of it so it doesn't look like a default Nginx install; turns out the autoindex formatting mangles .xml files getting served from disk 🤦
To keep it interesting, we've added the rule that we only deploy one micromirror per building, so HE FMT2 gets checked off the list.
CodingFlyBoy is now live for epel, opensuse, and arch linux, putting it at 3/6 projects live.
My gut feeling is that the sweet spot for these micro mirrors is going to be about 100Mbps of baseline load, so we have 900Mbps of burst capacity for individual requests / something happening like a release day or a CVE.
So if we end up with this thing running with its NIC at >20% capacity ALL the time, I think we overshot and need to put fewer projects on each Micro Mirror.
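That rule of thumb is easy to state as code. A hypothetical sketch (the function name and 20% threshold are mine, drawn from the reasoning above, not an actual monitoring script we run):

```python
NIC_CAPACITY_MBPS = 1000  # 1GbaseT NIC on the T620

def needs_fewer_projects(baseline_mbps: float, threshold: float = 0.20) -> bool:
    """Flag a Micro Mirror whose steady-state load is eating into the
    burst headroom needed for release days and CVEs."""
    return baseline_mbps / NIC_CAPACITY_MBPS > threshold

print(needs_fewer_projects(100))  # 100Mbps baseline: the sweet spot → False
print(needs_fewer_projects(350))  # NIC >20% busy all the time → True
```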
I also think it's a good demonstration of how these Micro Mirrors with 8GB of RAM really are leaning on the fact that they're entirely SSD based.
This micro mirror with three projects is reading more from disk (left) than our big iron mirror with 384GB of RAM (right).
I so badly want to do something cute like hack a 2TB laptop spinner into these T620s, but it just doesn't make sense for how hot the entire working set is on these small mirrors.
@kwf depending on the analysis you want to do, negative binomial might be better than Poisson (cf. https://www.johndcook.com/blog/2009/11/03/negative-binomial-poisson-gamma/ and links from there). It isn't memoryless, so recursively reasoning about request counts gets harder, but it might be better as a distribution for a predictive model (because the variance and mean are no longer the same parameter, and you can think of it as a mixture of Poisson distributions). You could even consider https://en.wikipedia.org/wiki/Conway%E2%80%93Maxwell%E2%80%93Poisson_distribution, as suggested in the comment.
@kwf it sounds like one thing you might gain out of this project (just looking at your thread here) is factoring apart the load so that the more popular subset of files is more widely available on a custom CDN of sorts. Depending on the over- or underdispersion of the distribution of HTTP requests, a simple model using the Poisson distribution may be more or less suitable.
@kwf It isn't clear to me, however, if you really need a super clever model for the overall distribution of HTTP requests. Maybe that's so because the right model would help you factor the distribution in a way that shows you how to break things apart onto micro mirrors.
This leads to a question: maybe once you have split things up onto micro mirrors, the load on each mirror would be more Poisson-like, or exhibit underdispersion. What would you expect or want to see?
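One quick way to eyeball the over/underdispersion question is the variance-to-mean ratio of request counts. A minimal stdlib-only sketch with made-up hourly counts (the numbers are purely illustrative, not real mirror data):

```python
from statistics import mean, pvariance

# Hypothetical hourly request counts for one mirror.
# For a Poisson process, variance ≈ mean (dispersion index ≈ 1);
# a ratio well above 1 suggests overdispersion, where a negative
# binomial model may fit better.
counts = [120, 95, 310, 88, 102, 450, 97, 110, 93, 105, 280, 99]

dispersion = pvariance(counts) / mean(counts)
print(f"dispersion index: {dispersion:.1f}")  # well above 1 → overdispersed
```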
@trurl I have no idea what I'm looking for.
We're focusing on the operational and political problems for micro mirrors. Numeric success metrics for them are a whole other ballgame.