<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta property="og:title" content="On not scaling LURK: a tale of
maintenance, federation, and governance" />
<meta property="og:description" content="Ruminations on scale, technology and sustainability and their consequences for how lurk.org is run" />
<meta property="og:image" itemprop="image" content="https://txt.lurk.org/on-not-scaling-lurk/taSj1BprSmeaHqBqoKOS6Q.jpg" />
<meta property="og:url" content="https://txt.lurk.org/on-not-scaling-lurk/" />
<meta property="og:type" content="website" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="On not scaling LURK: a tale of
maintenance, federation, and governance" />
<meta name="twitter:description" content="Ruminations on scale, technology and sustainability and their consequences for how lurk.org is run" />
<meta name="twitter:image" content="https://txt.lurk.org/on-not-scaling-lurk/taSj1BprSmeaHqBqoKOS6Q.jpg" />
<meta name="twitter:image:alt" content="Capybara with a hat says 'too rad to be sad'" />
<title>On not scaling LURK: a tale of maintenance, federation, and
governance</title>
<style type="text/css">
@font-face {
font-family: 'Route159-SemiBold';
src: url('../Route159-SemiBold.woff') format('woff');
}
body{
margin:40px auto;
max-width:650px;
line-height:1.6;
font-size:20px;
color:#444;
padding:0 10px;
font-family: 'Route159-SemiBold';
}
a:hover, a:visited {
color: #7800FF;
word-wrap:anywhere;
}
h1,h2,h3{
line-height:1.2;
}
figcaption {
text-align: center;
}
#ecf {
padding-top: 6em;
display: block;
margin-left: auto;
width: 25%;
}
img {
width: 100%;
margin: auto;
}
@media only screen
and (max-device-width: 400px)
and (orientation: portrait) {
body {
margin: 1em 2em;
font-size: 14px;
line-height: 1.4;
font-family: sans-serif;
}
figure {
width: 100%;
margin: auto;
}
}
@media only screen
and (max-device-width: 650px)
and (orientation: portrait) {
body {
margin: 1em 2em;
font-size: 16px;
line-height: 1.4;
}
blockquote {
margin: 0 auto;
border-left: 2px solid #444;
padding-left: 1em;
}
figure {
width: 100%;
margin: auto;
}
}
</style>
</head>
<body>
<h1
id="on-not-scaling-lurk-a-tale-of-maintenance-federation-and-governance">On
not scaling LURK: a tale of maintenance, federation, and
governance</h1>
<p>This text is important. It’s also quite long. There is no tl;dr.
But there are some good memes.</p>
<p>It is both long and a long time in the making, as it reflects some
of our internal discussions that started in November 2022, when Elon
Musk completed his purchase of Twitter, and suddenly many people
found their way to the Fediverse. This moment changed things for the
Fediverse and also changed things for LURK. In the interest of
accountability and transparency, we want to share our reflections on
the matter with you all.</p>
<p>Let’s start with the numbers. In November 2022, we welcomed 50
new people to the instance. Moreover, many who had created an
account in prior years (we started the instance in 2018) also came
back to hang out more, now that the incentive to flee Twitter became
more pressing. To put these numbers in perspective,
over the years, post.lurk.org had around 150 people active at any
one time. That week the number jumped to 350 and has hovered around
300 since then. Hi everyone. Of course, LURK runs more than a
Mastodon instance, and if we take this into account, the total
number of LURKers is more like a few thousand. However, the LURK
instance is where the most visible interaction takes place, and it
has in many cases become an entry point to the other things we
host.</p>
<p><img src="olUPjfz5SHi-HXdN0x7kNw.webp" /></p>
<p>We’re happy that many of you trust us and consider this instance
your new home base. However, like many other Mastodon instances that
suddenly grew, the increase in active users both on our instance and
across the fediverse meant that the workload of our server increased
significantly. Simultaneously, this change in active accounts, both
old and new, changed the vibe of our local instance. Several
instance admins have written blog posts on
how they scaled their technical infrastructure in response<a
href="#fn1" class="footnote-ref" id="fnref1"
role="doc-noteref"><sup>1</sup></a> and this is our post explaining
why we didn’t. To be sure, for post.lurk.org, we also put in many
hours of technical work to better accommodate new folks and the
growth of the wider fediverse. During this period, some probably
remember how we were testing all sorts of things and how this was
impacting the instance for better or worse. We even had to ask one
of our hosts, Eclips.is, to kindly expand the capacity of the server
where we run Mastodon a bit. This was not to accommodate even more
users, as many other admins were doing at the time, not at all; in
our case it was to be able to <em>just</em> keep the instance afloat
with its new 350 active users! Finally, things got stable, and
somehow post.lurk.org is considered a nice instance from the
outside. This translates into receiving many requests to join the
instance, even though it’s explicitly closed and we are actively
declining them. Why? The temptation to scale things up further could
seem obvious. The more LURKers the better? Well, not quite. For us,
not scaling up was always the obvious choice for the sustainability
of the project.</p>
<p><img src="RFbXd8pXRCKW8qkxrIOeXg.webp" /></p>
<p>Specifically, there are three interrelated ways in which the
issue of sustainability comes into play for post.lurk.org. First,
the long-term sustainability of the project itself. Second, its
financial sustainability. Last but not least, its ecological
sustainability. All three concerns have been actively guiding us
until now and will hopefully keep guiding us going forward. They in
turn touch on how to provide access to the instance in the future,
how we will maintain the server, and what we do about the threat of
Threads and, more generally, about a computational landscape in
which big tech openly deploys and supports more and more damaging
and toxic technological choices.</p>
<p>In terms of long-term sustainability, the growth of the space is
a consideration, and in particular the change in social dynamics
that occurs during moments when many new folks (re)join
post.lurk.org as a result of something happening on Twitter (or
whatever it’s called these days). That change is rooted in the
tension between providing friends (and friends-of-friends) a space
to network in a rich and focused environment, <em>and</em>
maintaining that environment. On the one hand, we want to give many
the possibility to join post.lurk.org and the wider Fediverse; on
the other hand, there is only so much that we can do as a small
collective to make a wider transition happen. Culturally speaking,
we also want to sustain the vibe of the space we have been creating
while we keep trying to figure out what the Fediverse is about<a
href="#fn2" class="footnote-ref" id="fnref2"
role="doc-noteref"><sup>2</sup></a>. Throughout the years, our slow
growth through invites and word of mouth among friends-of-friends
has helped with maintaining that focus and a pleasant environment.
But in times of crisis, like in November 2022, many people needed a
new home and, of course, this has an impact on the experience of the
instance. So what to do? Well, the bottom line is that there is only
so much we can, and… honestly… want to do.</p>
<p><img src="iFfov8BmSq21IIPSNJtDFg.webp" /></p>
<p>Since the start, we have tried to focus on quality over quantity. This
has meant that we try to maintain a healthy diversity—across
genders, creative practices, cultural backgrounds—rather than aiming
at opening the door to a large number of people vaguely connected or
interested in digital media and cultural practices. This is to a
large extent because we do this for the sake of it and in our spare
time, so we want this to remain an interesting place and hub for
communities of practice that inspire us, rather than a chore. At
times, and particularly since November 2022, that has meant putting
the brakes on new sign-ups to make sure that this sentiment keeps
being shared by everyone. At the
same time, this means we have had to exclude some folks, who, as a
consequence, felt left out. This sucks, we know it sucks, and it’s
the consequence of refusing to scale.</p>
<p>Nevertheless, we feel that not-scaling has paid off. For us, one
of the great things about having been involved in post.lurk.org is
the quality of the space and how generative it has been for our
practices versus how little time we, as an admin team, have to put
into keeping it running. That is something we want to maintain
and something that is at risk when scaling the instance. Still, we
want a mechanism through which people can join post.lurk.org. After
all, even if we don’t want to grow, there is also the fact that some
people join and eventually leave, some join and never use their
account, and
some simply disappear into the ether after a while. That’s cool. But
this means that we could potentially welcome new people
occasionally, without compromising on our way of running the
instance.</p>
<p><img src="3DCjoMkpReeZYDijQbP_4w.webp" /></p>
<p>Until now, our onboarding was ad hoc: opening applications every
now and then, and suddenly receiving waves of messages from people
explaining to us why our instance is meaningful to them. Filtering
these applications is one of the most unrewarding and stressful
things about this approach, all the while having to make important
but also, at times, arbitrary selections. Part of the issue is the
crappy interface for selection—there is no possibility to respond to
an application other than accepting or rejecting it, for
instance—but a larger part is the arbitrariness of it. The secret
LURK truth is that, more often than not, people we found exciting
based on their application turned out not to be super engaged (if at
all), and likewise, people we let in on a whim have become some of
the nicest LURKers! Of course, we’re not naive, and
this is a social process that is not that surprising in community
building. The point is that we feel that the application method is
not only stressful, but also doesn’t add anything to existing social
processes emerging in online communities, so let’s try something
else.</p>
<p>One of the decisions we made in November 2022 was to cap the
number of accounts on post.lurk.org at 666 (keeping with our
tradition of using Meaningful Numbers™). Over the past years we have
stuck with that and it has felt pleasant. And here is the plot
twist: in the very near future, we will automatically remove unused
accounts. We will warn (of course!) accounts that have not
<em>logged in</em> for 12 months and delete them after 13 months of
inactivity. This will slowly and automatically open up new spots on
post.lurk.org as people lose interest or move on, which is fine
really (please send postcards, though!). We will hand out invites to
you if you request them and have been around and active for a while,
but we <em>really</em> still want to privilege both diversity
<em>and</em> people that are not yet on the fedi. You will for sure
be lectured about it once more by the Helpful Heron when you ask for
an invite URL! Really. It’s an experiment, and we will see how it
goes, and evaluate things after a while.</p>
<p><img src="thecycle-small.webp" /></p>
<p>It’s also important to say that, next to running the LURK
instances and their other services, we are also actively developing
and offering workshops for communities to onboard the fediverse, not
just as users of an existing instance, but as collective
administrators of their own instance<a href="#fn3"
class="footnote-ref" id="fnref3"
role="doc-noteref"><sup>3</sup></a>. And this is really the key
thing that cultural practitioners need to understand about
decentralised and federated social media, namely the potential of
having a balance between online communities of practice that are
humanly scaled, and still be able to connect and reach out to many
many many others. For instance, very recently, it was exciting to
see the new <a
href="https://social.toplap.org">social.toplap.org</a> instance
emerge to give a proper hub for live coders, who until now tended to
flock to LURK or similar places where algorithmic and software art
is welcome (like <a
href="https://merveilles.town">merveilles.town</a> and <a
href="https://sonomu.club">sonomu.club</a> instances). Running your
own instance is not trivial, but it’s not impossible for a small
group of motivated people, as we’ve seen in our workshops. And this
instance mitosis is the kind of scaling we’d like to see more happen
on the Fediverse instead of the emergence of unsustainable and large
instances.</p>
<p><img src="eU8gcyIPReOL-nGS2uTJlQ.webp" /></p>
<p>As mentioned above, we do this for the sake of it, and, outside
some flurries of work on technical things or moderation issues, it
has been fairly easy going. We want to keep it this way and are
really keen on none of this becoming a <em>Work</em> or a
<em>Chore</em>. Last year Brendan, both a long-time friend and an
experienced hater of computers, joined the team to help out. He has
been a great help with gnarly technical stuff. Others have
approached us offering help in various ways, for instance with
moderation, which has been useful with the current state of the
world. Others, however, have also approached us offering to help us
become larger and more professional, and we kindly rejected those
offers because, at the end of the day, that means more meetings and
whatnot… and <em>Work</em>. What <em>works</em> for us is to stay haphazard
and spontaneous, the way we’ve been operating hitherto. We have an
idiosyncratic way of working, a weird governance model so to speak,
and we like it despite its highly artistic take on administration
and questionable technological apparatus<a href="#fn4"
class="footnote-ref" id="fnref4"
role="doc-noteref"><sup>4</sup></a>. In the context of the ATNOFS
project in 2021 we did some introspection and came up with an honest
description of such a take:</p>
<blockquote>
<p>“Specifically in terms of governance, while it might be seductive
to go for a democratic consensus-governance model, this can also be
a risk when it comes to starting out and establishing the space if
the group doesn’t have enough capacity. In order to highlight this,
we introduced an honest description of LURK’s governance model as an
“impulsive and time-constrained benevolent eurocentric
oligarcho-do-ocracy”. Deconstructing what this means: our governance
model is impulsive because scratching itches / personal enjoyment
are the main motivators for work on LURK. Time-constrained because
everything is done whenever the administrators / moderators find
free time to work on the server; TODOs tend to span months, unless
they happen to be scratching someone’s itch. Benevolent, as we like
to consider ourselves well-intended, and are willing to listen,
learn and do best efforts given our constraints. Eurocentric, as the
entire team is in one timezone, concentrated on four to five
languages, and culturally homogeneous. Oligarchy, as the governance
structure consists of a small cabal (a conspiratorial group) which
makes executive decisions. A do-ocracy, because decisions are made
primarily by people acting on something. Moderation decisions such
as accepting new people to the server, banning other servers etc.,
tweaking the technical configuration are often just “done” by those
within the oligarchy without prior discussion. Only very difficult
situations, non-trivial technical issues, or really large decisions
are actively discussed in the oligarchy. All of that does not imply
that we haven’t, for example, solicited input and feedback on things
such as the Terms of Service to the larger LURK.org userbase.”<a
href="#fn5" class="footnote-ref" id="fnref5"
role="doc-noteref"><sup>5</sup></a></p>
</blockquote>
<p>Surely, there is an alternative timeline where LURK is run as a
super structured COOP using Loomio and whatnot to implement various
models of liquid democracy and participation, but, honestly, in our
present timeline, our model is not likely to change soon, and we
have the feeling that if we stick to this approach, we can stick to
it for the long run (by the way, did we miss the opportunity to
celebrate LURK’s 10-year anniversary?<a href="#fn6"
class="footnote-ref" id="fnref6"
role="doc-noteref"><sup>6</sup></a>). Surely, we can improve and
tweak things, but, it’s nice to appreciate when something works well
enough and brings good feels. <em>SLAPS ROOF OF LURK</em>. To be
sure, participatory modes of governance are the way forward and our
position is by no means a critique of these. If anything, we are
strong believers of direct democracy models, such as participatory
democracy, deliberative democracy, and agonism. It’s just that LURK
is more of an artistic driven approach to long term community
building and server infrastructure, and we would rather not pretend
to be otherwise<a href="#fn7" class="footnote-ref" id="fnref7"
role="doc-noteref"><sup>7</sup></a>. With that said, as exemplified
by this wall of text, we are ruminating <em>a lot</em> on these
issues, and our slow cooking is so slow that it’s probably more
accurate to describe it as fermentation. It took us 5 years to
figure out how to have a 3-in-1 Code of Conduct, Terms of Service
and Privacy Statement that, we felt, was strong enough. To reach
this point, we spoke both formally and informally with many other
LURKers and friends, but also learned from practice and from what
other community servers were doing.</p>
<p><img src="mC-4HGEvTjCMi-lvo4u07g.webp" /></p>
<p>Concerning financial sustainability, one of the ways we have been
receiving (and gladly accepting) a tremendous amount of support is
in terms of donations. We started an <a
href="https://opencollective.com/lurk">Open Collective</a> in 2021
and have been amazed at how people have chipped in. Because we are
small, frugal, anti-cloud and get some of our infrastructure
sponsored<a href="#fn8" class="footnote-ref" id="fnref8"
role="doc-noteref"><sup>8</sup></a>, we have historically spent very
little costs regarding infrastructure. The reason we started
collecting donations was to see if we could compensate for
maintenance labour instead, and hopefully demonstrate the value of
such a tactic at a time when Big Tech and a misunderstanding of open
forms of software production have led us to believe that the digital
commons and digital solidarity are things that simply fall from the
sky. This is all the more crucial for us since, as discussed
earlier, we often help other cultural workers run things themselves,
and pretending that the economic dimension does not exist is incredibly
dishonest. (Post-)Free culture evangelism has to stop sounding like
an obscure, hypocritical pyramid scheme in which only the most
privileged are able to play the game. To our
surprise, soliciting donations has worked so far, and we have been
using the majority of donations to compensate the sysadmin and
moderation labour of the team. We believe we are one of the few
instances where donated funds are used primarily to pay people,
rather than cloud companies. Really.</p>
<p>However, we also realize that this can raise expectations about
what LURK as a project will become, and we want to be explicit that
we are not planning to change the nature and scale of our operation.
We will use the funds to continue to pay for labour and to keep a
buffer for those moments when we suddenly need to fix something
urgently. If there is any surplus, we aim to donate upstream. This
could go either to Servus (who has hosted one of our servers for
free until now), or to
some individuals and collectives developing software and projects we
use and like when working on LURK. We are still trying to figure out
how we will make it work and it will likely take a couple of years
before we have something that works. Fermentation. To be honest,
it’s difficult to get a clear idea of our operational expenses in
terms of labour, and as a result, how to best use the buffer. Before
asking for donations, we spent two years carefully writing down all
the time we spent on maintaining LURK infra to get an idea of how
much labour would need to be supported. At the moment we’re still
juggling things. For instance, we’ve now noticed that it only takes
a few days of technical or moderation clusterfuck for our buffer to
empty very fast. Also, we recently learned that a small commercial
datacentre that has offered us free hosting until now is going to
start charging us. Details are unclear, but this adds another
parameter to consider in our optimistic plans. What is sure is that
your ongoing support in the form of donations will allow us to
continue this fermentation of community server<a href="#fn9"
class="footnote-ref" id="fnref9" role="doc-noteref"><sup>9</sup></a>
maintenance for the long term. &lt;3</p>
<p><img src="sharecropping.webp" /></p>
<p>Last but not least, at the intersection of financial and
ecological sustainability is the question of technology use.
Sticking to the magic number of 666 accounts and operating with a
small team not only allows post.lurk.org to socially function well,
it also means that on a technical level, we don’t all of a sudden
have to become DevOps cloud engineers. Growing more would mean that
we would have to fundamentally reconsider how post.lurk.org is set
up and installed, and then start investing in cloud technologies and
platforms to keep things running. This is really something none of
us are looking forward to, or are even remotely interested in, let
alone supportive of, in terms of the type of maintenance we would
have to do, how much it would cost, and finally also how it sits
ecologically. We think morally there should be a clear upper bound
to how much the environment should suffer to facilitate shitposting.
From Low-Tech<a href="#fn10" class="footnote-ref" id="fnref10"
role="doc-noteref"><sup>10</sup></a> to permacomputing<a
href="#fn11" class="footnote-ref" id="fnref11"
role="doc-noteref"><sup>11</sup></a> and degrowth<a href="#fn12"
class="footnote-ref" id="fnref12"
role="doc-noteref"><sup>12</sup></a>, several of us on the admin
side of LURK are interested in different frameworks to
reconceptualize computing’s relation to the environment and that
practice is also expressed in how we run post.lurk.org. It’s also
great to see how this interest has drawn many who share the same
views to the instance, and are themselves active in these fields<a
href="#fn13" class="footnote-ref" id="fnref13"
role="doc-noteref"><sup>13</sup></a>. Currently, post.lurk.org runs
on a fairly limited setup on a more than a decade old machine. The
backup system likewise is made up of second hand and spare equipment
(hosted as encrypted blobs in apartments and under work desks). So
far, this has been workable, but unfortunately Mastodon has been
until now designed with an unlimited growth mindset. For instance,
Mastodon servers by default accumulate an ever-growing cache of
remote media. On the one hand, this is necessary to be able to
properly moderate, on the other hand, it relies on ever-growing disk
space, which is wrongly considered as a “cheap” and easy to access
commodity and therefore this is not considered a fundamental
issue.</p>
<p><img src="i2l56pBPRxKNGJ6FJabDkw.webp" /></p>
<p>One of the things we do on post.lurk.org to counteract this is to
frequently prune this cache on the server. That, however, has some
implications: only the most recent remote posts are visible
instantly, and remote profiles that haven’t been interacted with in
a while will not have avatars or profile headers. When we remove
remote users from the database who have not been active in a long
time, this can also mean that you lose followers. Or, to be more
precise, the “followers” counter will suddenly be lower, since you
likely already lost those followers: the remote accounts will have
stopped using the fediverse long before we remove them from
the cache. Having said that, things like favourites and bookmarks
are not deleted, and we also won’t delete your personal data (unless
your profile becomes inactive for longer than a year, and we send
you a warning before that).</p>
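<p>For the technically curious, this kind of cache pruning can be
done with Mastodon’s built-in <code>tootctl</code> maintenance
commands, run periodically on the server. The snippet below is only
a rough sketch of what that looks like; the retention windows are
illustrative rather than our actual settings, and flags can differ
between Mastodon versions, so check the tootctl documentation before
reusing it.</p>
<pre><code># prune cached remote media older than two weeks
RAILS_ENV=production bin/tootctl media remove --days 14

# prune cached link preview cards
RAILS_ENV=production bin/tootctl preview_cards remove --days 30

# remove cached files that no longer belong to any status
RAILS_ENV=production bin/tootctl media remove-orphans

# remove remote accounts that never interacted with a local account
RAILS_ENV=production bin/tootctl accounts prune

# remove remote accounts that no longer exist on their home server
RAILS_ENV=production bin/tootctl accounts cull</code></pre>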
<p>The reason to discuss this is that, at the end of the day, it
also impacts the user experience, especially when the cloud mindset
of “everything at my fingertips forever” is the default. Some of you
use a feature of Mastodon to automatically delete old posts based on
some conditions. At the time of writing, we haven’t really decided
or seriously discussed whether it’s something we should encourage
everyone to do and, if so, what the default strategy would be, as it
can be configured in many ways (<a
href="https://post.lurk.org/statuses_cleanup">have a look</a> to get
an idea of all the options!). Keeping things constantly online that
are essentially ephemeral, or low value, feels wrong since it uses
actual resources<a href="#fn14" class="footnote-ref" id="fnref14"
role="doc-noteref"><sup>14</sup></a>. If you need to keep an
archive, you can export it from the configuration panel and either
<a href="https://s427.github.io/MARL/">explore it</a>, <a
href="https://purr.neocities.org/">explore it some more</a>, or <a
href="https://codeberg.org/oliphant/posty">turn them in to a web
site</a>.</p>
<p>We want to mention this because one of the big unknowns at this
point is whether we can continue running the server as we have done
before as the entire network grows in size. For instance, one way
the network could drastically grow is if/when Facebook’s
Instagram’s Meta’s Threads becomes fully interoperable.</p>
<p><img src="SE3W9YkGTreFRaMVgkF2vg.webp" /></p>
<p>In conclusion—and to answer a question that comes back every now
and then in our instance—this is also where these three strands
coincide into our position on federating with Threads: it is weird
that volunteer mods and admins will have to put in effort to
maintain a connection to what essentially is a giant and badly
moderated server. Likewise, it is weird that small alternative
projects will have to drastically upscale their infrastructure,
labour and capital investment to facilitate a billion dollar
corporation’s regulation dodging/<a
href="https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish">EEE</a>.
It is weird that we will have to decentrally store all kinds of
random crap from a social media empire that follows a cornucopian
perspective on computing and actively incentivizes the production of
libertarian bullshit at the expense of people and the planet. We
appreciate that others might feel that doing just that is a sound
techno-political strategy; more attention for the alternatives, etc.
The reason we got into post.lurk.org is to get away from all that
and try something else. So no, we will not federate with Threads.
What is the point really?</p>
<p><img src="FToAO_ZXEAkqlCg.webp" /></p>
<p>Happy LURKing :^) <a href="https://post.lurk.org/@yaxu">Alex</a>,
<a href="https://post.lurk.org/@320x200">Aymeric</a>, <a
href="https://post.lurk.org/@praxeology">Brendan</a>, <a
href="https://post.lurk.org/@lidia_p">Lídia</a>, <a
href="https://post.lurk.org/@rra">Roel</a></p>
<p><img src="taSj1BprSmeaHqBqoKOS6Q.webp" /></p>
<section id="footnotes" class="footnotes footnotes-end-of-document"
role="doc-endnotes">
<hr />
<ol>
<li id="fn1"><p>See for instance <a
href="https://leah.is/posts/scaling-the-mastodon/"
class="uri">https://leah.is/posts/scaling-the-mastodon/</a>, <a
href="https://mijndertstuij.nl/posts/scaling-mastodon-community/"
class="uri">https://mijndertstuij.nl/posts/scaling-mastodon-community/</a>,
<a
href="https://blog.freeradical.zone/post/surviving-thriving-through-2022-11-05-meltdown/"
class="uri">https://blog.freeradical.zone/post/surviving-thriving-through-2022-11-05-meltdown/</a>,
<a
href="https://nora.codes/post/scaling-mastodon-in-the-face-of-an-exodus/"
class="uri">https://nora.codes/post/scaling-mastodon-in-the-face-of-an-exodus/</a><a
href="#fnref1" class="footnote-back"
role="doc-backlink">↩︎</a></p></li>
<li id="fn2"><p>Mansoux, A., & Roscam Abbing, R. (2020).
<em>Seven Theses on the Fediverse and the Becoming of FLOSS</em>. In
The Eternal Network: The Ends and Becomings of Network Culture
(pp. 124–140). <a
href="https://monoskop.org/images/c/cc/Mansoux_Aymeric_Abbing_Roel_Roscam_2020_Seven_Theses_on_the_Fediverse_and_the_Becoming_of_FLOSS.pdf">https://monoskop.org/images/c/cc/Mansoux_Aymeric_Abbing_Roel_Roscam_2020_Seven_Theses_on_the_Fediverse_and_the_Becoming_of_FLOSS.pdf</a><a
href="#fnref2" class="footnote-back"
role="doc-backlink">↩︎</a></p></li>
<li id="fn3"><p>See <a
href="https://txt.lurk.org/how-to-run-a-small-social-networking-site/"
class="uri">https://txt.lurk.org/how-to-run-a-small-social-networking-site/</a>
and <a href="https://txt.lurk.org/ATNOFS/"
class="uri">https://txt.lurk.org/ATNOFS/</a><a href="#fnref3"
class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn4"><p>See the Jenga Computing entry of rosa’s Ecofeminist
Dictionary. De Valk, M. (2023). <em>rosa’s Ecofeminist Dictionary
(rED)</em>. In Sun Thinking. Solar Protocol. <a
href="https://solarprotocol.net/sunthinking/devalk.html#jenga-computing"
class="uri">https://solarprotocol.net/sunthinking/devalk.html#jenga-computing</a><a
href="#fnref4" class="footnote-back"
role="doc-backlink">↩︎</a></p></li>
<li id="fn5"><p>From LURK in A Transversal Network of Feminist
Servers, 2022, <a href="https://txt.lurk.org/ATNOFS/"
class="uri">https://txt.lurk.org/ATNOFS/</a><a href="#fnref5"
class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn6"><p>Hmmm. Yes. We. Did. See <em>What is LURK</em> <a
href="https://web.archive.org/web/20150206001212/http://lurk.org/groups/meta-lurk/messages/topic/1Bqk3euF2ou2v8KsttTwd7/"
class="uri">https://web.archive.org/web/20150206001212/http://lurk.org/groups/meta-lurk/messages/topic/1Bqk3euF2ou2v8KsttTwd7/</a><a
href="#fnref6" class="footnote-back"
role="doc-backlink">↩︎</a></p></li>
<li id="fn7"><p>On top of that, several of us are involved in such
models in other parts of practice and personal lives, whether art
collectives, collectively run kindergartens, food coops or open
source projects. There is a limit to how many of these things you
can meaningfully take part in.<a href="#fnref7"
class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn8"><p>This text and our mailing lists are at <a
href="https://servus.at">servus.at</a>, <a
href="https://post.lurk.org">post.lurk.org</a> is sponsored through
<a href="https://eclips.is">Eclips.is/Greenhost</a><a href="#fnref8"
class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn9"><p>See <a href="https://monoskop.org/Community_servers"
class="uri">https://monoskop.org/Community_servers</a>.<a
href="#fnref9" class="footnote-back"
role="doc-backlink">↩︎</a></p></li>
<li id="fn10"><p>De Decker, K., Roscam Abbing, R., & Otsuka, M.
(2018). <em>How to build a low-tech website</em>. <a
href="https://solar.lowtechmagazine.com/2018/09/how-to-build-a-low-tech-website/">https://solar.lowtechmagazine.com/2018/09/how-to-build-a-low-tech-website/</a><a
href="#fnref10" class="footnote-back"
role="doc-backlink">↩︎</a></p></li>
<li id="fn11"><p>Mansoux, A., Howell, B., Barok, D., & Heikkilä,
V. M. (2023). <em>Permacomputing aesthetics: potential and limits of
constraints in computational art, design and culture</em>. Ninth
Computing within Limits. <a
href="https://limits.pubpub.org/pub/6loh1eqi">https://limits.pubpub.org/pub/6loh1eqi</a><a
href="#fnref11" class="footnote-back"
role="doc-backlink">↩︎</a></p></li>
<li id="fn12"><p>Roscam Abbing, R. (2021). <em>‘This is a
solar-powered website, which means it sometimes goes offline’: a
design inquiry into degrowth and ICT</em>. Seventh Computing within
Limits. <a
href="https://limits.pubpub.org/pub/lecuxefc">https://limits.pubpub.org/pub/lecuxefc</a><a
href="#fnref12" class="footnote-back"
role="doc-backlink">↩︎</a></p></li>
<li id="fn13"><p>De Valk, M. (2021, June). <em>A pluriverse of local
worlds: A review of Computing within Limits related terminology and
practices.</em> Seventh Computing within Limits. <a
href="https://limits.pubpub.org/pub/jkrofglk">https://limits.pubpub.org/pub/jkrofglk</a><a
href="#fnref13" class="footnote-back"
role="doc-backlink">↩︎</a></p></li>
<li id="fn14"><p>greetings to all the reply guys who are now dying
to let us know that computational resources needed for maintaining a
cache are minimal compared to the resources needed to process data:
you’re missing the point and the bigger systemic issues at stake,
here’s a little ASCII heart for your effort tho &lt;3<a
href="#fnref14" class="footnote-back"
role="doc-backlink">↩︎</a></p></li>
</ol>
</section>
</body>
</html>