[DIP-37] Increase of validator limit per operator

Hi @Yuting

Your contribution to the DAO, from Monitor SSV to the discussions you participate in, can probably rival that of some of the longest-standing members of the DAO.

So, as I have said privately, my aim was not to disparage your contribution to the DAO or to label you or Linko as bots.

I made an observation which has not been discussed in the DAO thus far, and which I believe is an important one.

Both of you have brought the SSV DAO its first temp check, which is undoubtedly beneficial for the DAO. While the temp check probably used AI for a bit of structure and to bridge the language barrier, that’s what AI is good for. I also know that the concerns you and Linko raised there are your genuine concerns about how the MMs disclose what they do, and what the DAO could do about that.

For the other concerns you have raised about this specific proposal, I have nothing more to add to what I have already said above. It comes down to what the DAO wants to do, and as you have proposed a temp check for MMs, feel free to propose a temp check for what you think would help with this issue, or reach out to the members of the VOC who are already working on it :folded_hands:

3 Likes

@Ivan really good points, I see what you mean.

That’s a strong point: if you’re securing that much ETH, investing more in solid hardware is logical and justified.

At that scale, as you mentioned, the real challenge is actually attracting all that ETH/validators in the first place. Since SSV stays fully permissionless, the argument that raising the cap will centralize things further largely loses weight. Operators still have to attract stakers; it’s not just about who can buy better hardware.

Big staking players like Lido, Binance and Ether Fi already run the show, so raising the validator limit per SSV operator won’t make much difference in the actual market dynamics of Ethereum staking.

Ethereum wants regular folks to be able to participate on the Beacon Chain at home with accessible gear, sure, but for the SSV Network, the numbers @Ethernodes shared show that being an SSV operator is more of a professional endeavor than a hobby, sideline, or passion project.

With the current state of things, about 83% of operators (1,460 out of 1,769) are private and around 90% of validators run on private setups. It totally makes sense to raise the validator limit per operator so private operators can scale up more efficiently.

The ideological side of this is real, but honestly the technical and economic realities outweigh it. I think I’m leaning in favor of DIP-37. :+1:

Thank you all for taking the time to answer my concerns! :grinning:

1 Like

Voting is now live :vertical_traffic_light:

https://snapshot.box/#/s:mainnet.ssvnetwork.eth/proposal/0xcf7ff20779343885c2df491651229c35650899bd2ecd66abe52ba6510baa434c

@fod
@spookyg
@derfredy
@h.m.23-0neinfra
@markoinether
@axblox
@flo
@thomasblock
@yuting
@damon
@lemmagov
@llifezou
@blockside
@sigmaprime
@kenway
@hashkeygov
@Ethernodes
@p2pgov
@chainupgov
@kiln
@Allnodes

2 Likes

hello all!

About four months ago I voted YES on DIP-29, which allowed increasing the validator limit from 500 to 1000 (=> x2). Now we are talking about increasing the limit from 1000 to 3000 (=> x3). In another four months, will we be talking about increasing the limit from 3000 to 12000 (=> x4)?
Also, as no one is talking about it and I’m probably not very up to date: don’t we have to take Pectra into account as well? I would appreciate it if someone could enlighten me.

Furthermore, I would vote for less emotional, more diplomatic and respectful discussions here, as I feel the mood was heating up during some of the latest proposals.

Thank you.

2 Likes

@flo Thank you for your message, and Hi :waving_hand:

The limit increase simply represents what’s possible with the current software stack and the protocol. What people make of it is up to them and their business models (see my remarks on the Temp Check below).

We, the DAO, should allow for maximum possibilities; therefore, I support increasing the maximum to 3000 validators, provided this doesn’t require adding new hardware resources (I’m not an operator, but this seems to be the general understanding).

As far as I understand the data, this number is safe for the current state of the network. If there are more updates to the network, like we’ve seen with the Alan fork, the DAO should increase the limit even further and let the “market decide” what people do with this new possibility. So, yes, I think there could be more such increases, but I have not heard of any fundamental breakthrough expected in the next couple of weeks/months!?

As for your comment on Pectra, I assume you mean network and operator fees that account for effective balance, right?

I think you’re aware of the adjustments made to the Incentivized Mainnet Program, which accounts for some of these effects.

As for the support of Pectra consolidated validators on a fee level, I have two thoughts:

  1. I was a strong advocate myself, worried that not implementing a change to the fee model right from the get-go in response to Pectra would be a mistake. It turns out I was wrong, and the core team’s estimate that only a very small number of validators would consolidate was correct. Pectra.info shows me that out of 1M+ validators, only 3,663* consolidated. Given the high costs of implementation and the extra complexity, it was wise to wait and see.
  2. The decision was also delayed because we’ll have more flexibility when SSV 2.0 and the SSV Chain arrive. As far as I understand the situation, the DVT component of SSV (yes, we have DVT and bApps) could itself be built as a bApp, allowing for much more flexibility when it comes to fee models or other business models. My understanding is that all of this will be revisited once that vision has progressed, allowing for solutions far beyond what we think is possible today.

*Not entirely sure if the number is 100% correct, as the chart on the page looks different, but it is very low.

As for your comment on the heated debates, I 100% share your sentiment that they have to end. The good news is that it has been sorted out in private between the conflicting parties, and most of it was due to misunderstandings resulting from asynchronous work patterns, language barriers, and individual needs. We had many calls in the last weeks, and I’m especially happy to present a major breakthrough with the [TEMP CHECK] Revisiting the Operator Marketplace and Fee Models.

This is a direct result of the very first (and only) heated debate, surrounding DIP-32. It’s worth noting that the remarks we’ve seen from others above, which led to certain emotions, are also rooted in the problems presented in this temp check. I wish the temp check had hit the forum earlier, but it is a difficult and complex matter. Finally, all of these discussions around small vs. big, public vs. private, fees vs. rewards have a home, and I hope everyone feels their opinion is heard and that we can return to the pre-DIP-32 state of the DAO, which is what I have always referred to as “the kindest DAO in the space” (I think you have heard me say that in public many times :smile:).

So, let’s hug @everyone :people_hugging: and keep the good questions coming.

Ben

3 Likes

I am in favor of this proposal and will be voting as such shortly. I agree that this is a big efficiency improvement for large operators, and I don’t see any of the possible downsides as significant.

However, regarding the centralization issue, I don’t see this as a meaningful driver of centralization. Although our public operator market and the general distribution of validators across operators should be improved, there are many other factors clearly causing these problems, and the validator limit isn’t one of them. As others have said, the recent Temp Check post is an effort to start addressing them.

And regarding the hardware requirements: if we do want to keep node operation open to those with lower-end hardware while still allowing full utilization of high-end hardware, maybe we should add a feature that lets operators set their own limits, so they can guarantee they stay within the capacity of their machines. A rough sketch of what that could look like is below.
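
To make the idea concrete, here is a minimal sketch of how an optional, operator-defined cap could gate validator registration. This is purely illustrative: the names (`Operator`, `effectiveLimit`, `registerValidator`, `selfLimit`) and the protocol-wide maximum of 3000 are assumptions for this sketch, not the actual SSV contract or SDK interface.

```typescript
// Hypothetical sketch only: an optional, operator-defined validator cap.
// Names and the 3000 protocol-wide maximum below are illustrative,
// not the actual SSV contract or SDK interface.

const PROTOCOL_MAX = 3000; // per-operator hard cap proposed in DIP-37

interface Operator {
  id: number;
  validatorCount: number;
  // Optional self-declared cap; leaving it unset means "use the protocol max".
  selfLimit?: number;
}

function effectiveLimit(op: Operator): number {
  // An operator can only tighten the limit, never exceed the protocol max.
  return op.selfLimit !== undefined
    ? Math.min(op.selfLimit, PROTOCOL_MAX)
    : PROTOCOL_MAX;
}

function registerValidator(op: Operator): void {
  if (op.validatorCount >= effectiveLimit(op)) {
    throw new Error(
      `Operator ${op.id} is at its limit of ${effectiveLimit(op)} validators`
    );
  }
  op.validatorCount += 1;
}

// Example: a home operator caps itself at 400 even though the protocol allows 3000.
const homeOperator: Operator = { id: 42, validatorCount: 399, selfLimit: 400 };
registerValidator(homeOperator); // ok, count becomes 400
// registerValidator(homeOperator); // would now throw: operator is at its limit
```

Making the cap opt-in (unset by default) would keep behavior identical to today for every operator that doesn’t configure it, which fits the optional, low-cost framing discussed in this thread.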

2 Likes

Interesting point, but this would apply to public operators only, right? Can you give me an example of a public operator that has taken on too many validators? :grin:

2 Likes

Haha yeah… This probably won’t be an issue until we address the other problems and improve the health of the operator market. But this actually did happen in early mainnet when the requirements were higher. Some operators got dumped on unexpectedly, and they couldn’t handle the load and had to upgrade their setups.

And yes, this might only be relevant to public operators, assuming all private operators have some control over the validators added to them. However, this might not always be the case, and there’s probably no disadvantage to making it optional for everyone.

Overall, letting operators define an optional limit just seems like a low-cost way to avoid this issue entirely.

2 Likes