Alternate Implementations / Project Reboot
If asked my opinion, the first thing anyone forking Steem should do is reimplement a subset of the daemon in a modern, safe language. I've long thought about doing just that (although in service of a brand new, from-scratch chain).
All operations related to `custom_json` should be moved out of the consensus daemon, and all web-interface-supporting functions should be deprecated in the core. External indexers/databases should walk the blocks, building indices as necessary. The blockchain daemon should not concern itself with content or metadata, only consensus code and some bulk operations used by the external indexers to get the data out (ideally just once per site). Ideally, it'd have only one RPC read call, like `get_block`, that returns all of the ops, real and virtual, for an external indexer to deal with.
I'd use a modern language like Go or Rust (ideally Go, as it's perfectly suited to multithreaded p2p network services like this).
A future hardfork should remove weird and esoteric features like SBD, timelock savings, the internal market, and escrow. It's not that they're bad; it's that they're unused, and thus a maintenance burden with no corresponding UX benefit. If they are ever needed or wanted in the future, implement them at that time. A reimplementation should focus on the core functionality that is widely used every day. I'd even consider increasing the block interval by 2-3x; the 3-second interval was chosen to allow synchronous UI feedback for writes, but this is entirely unnecessary in practice, and the same effect can be achieved clientside with an RPC call that returns the commitment status of a given transaction.
`steemd` (and presumably now the `hived` code) is, in my opinion, prototype-level from a network service point of view, an opinion I think is shared by anyone who's had to run it in production. This is not a reflection on the professionals who produced it: they were optimizing for time, as it was a prototype when it was launched. It was designed to be one of only two components in the system: the server, and the JS web client. Those choices were outmoded the moment the blockchain gained significant traction: fine for a prototype, not for the long term. My decision to place `hivemind` in the stack (and, it would appear, inadvertently half-naming this new chain in the process) was a reflection of this state of affairs.
Production-grade network services should gracefully degrade, not outright crash. A lot of work has gone into shoring up the C++ implementation, and it's much better now than it was some years ago (it no longer returns invalid JSON on RPC errors, for example), but ultimately I just don't think that blockchains should be using languages like C or C++ five or ten years from now. The time to start on a robust implementation of this chain's protocol is now, alongside external indexers (of which `hivemind` should be the first of many). At that time, a v2 of the p2p transport should be investigated: I'd suggest something based on JSON/HTTP, or perhaps Protobuf/HTTP2 (gRPC). If it's not HTTP(S) like a normal web app (yes, I'm talking about p2p traffic), then opportunistic, unauthenticated TLS (with forward secrecy) should be used on the peer links, but going with normal HTTP(S) basically provides this for free.
Additionally, all of the RPC error messages should be specified formally (abandon forever the idea that "the code is the spec"; it's not) in a big list, with a numeric code for each, like the IRC protocol. The indexer frontends should adopt this approach as well, to simplify client development. (I posted a previous version of this a few minutes ago, with the wrong reward settings, so I deleted it and attempted to repost. "Missing posting authority" is not a useful or helpful error message for anyone; blockchain daemon error strings should never be passed through directly to a client in the presentation layer.)
There's no reason the daemon needs to be so unreliable, hard to deploy, or heavyweight. The heavy lifting, slicing, and dicing of social network data can and should be left to a standard RDBMS. Focus on the p2p protocol and the efficient storage and validation of consensus data, and let external, well-optimized web development stacks handle the rest. Things have already been moving in this direction for several years at my insistence; the fork presents a good opportunity to finish the job and cut the cord.
Development should take place off of GitLab and GitHub, too. I've taken to self-hosting Gitea and it's worked out great; new projects should consider doing the same. (The Go module tools help greatly with this, and can happily pull cryptographically-versioned dependencies from any URL.) Centralized censorship-and-surveillance platforms are the past, not the future, and companies like Microsoft (owner of GitHub, and now npm) and GitLab eagerly collaborate with the military and other violent types. Avoid giving them your money or attention.
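For instance, a go.mod for a chain component could pull its dependencies straight from a self-hosted Gitea instance; the hostname, module paths, and versions below are placeholders, not real modules.

```
module gitea.example.org/mychain/node

go 1.14

require (
	// Modules resolve from any URL; go.sum pins their hashes.
	gitea.example.org/mychain/p2p v0.1.0
	gitea.example.org/mychain/consensus v0.2.3
)
```

The accompanying go.sum file records cryptographic hashes for each version, so builds are verifiable no matter which host served the code.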
For collaboration, focus on tools that can be used safely and anonymously by anyone, such as IRC, email mailing lists, and self-hosted tools like Mattermost, Matrix/Riot, and Discourse. I've been using a self-hosted Mattermost install and so far it's been working great. Test and confirm that whatever tools you choose for the project can be used conveniently via Tor. The popular options such as Gitter, Discord, and Slack cannot; they discriminate on privacy grounds, put your team's membership at risk of both surveillance and censorship, and gate participation on whether or not your community will agree to an abusive third-party TOS. Do not use them, and reject arguments that call for you to do so because of their popularity.
Good luck, Hive. Anyone working on the project or adjacent projects who has questions should feel free to drop me an email; my contact information can be found on my website. I am happy to contribute time and expertise as necessary.