[DIP-37] Increase of validator limit per operator

Proposal Summary

This proposal, if passed, will require the ssv.network DAO Multi-Sig Committee (hereinafter: “MC”) to batch and execute the relevant transactions to update the relevant smart contracts, enabling operators on the ssv.network to run up to 3,000 validators each, up from the previously available 1,000 validators per operator.

Motivation

Since DIP-29, operators have been limited to 1,000 validators each. This was based on the DAO’s research showing that, at that point in time, a higher limit would result in unreasonable resource usage on operator machines and would make the ssv.network less efficient as a whole. However, both the professional and solo operator communities have long requested that this limit be raised even higher.

Therefore, since December 2023, and subsequently since DIP-29, the DAO has understood that this limit needed to be raised. With this in mind, the DAO has been hard at work optimizing resource usage and increasing this cap in order to reduce operational overhead for operators and satisfy the ssv.network community’s needs. This has been achieved with the new Alan fork, which went live at the end of 2024. Now, with sufficient statistical data, the cap of 1,000 can be raised to 3,000 validators per operator while keeping the computational resources used roughly the same, effectively reducing operator overhead.

This reduction in overhead will result in more streamlined operations for operators, lower costs, and potentially improved performance across the board.

Proposal particulars

  1. Execution Parameters

The proposed change enables each operator to manage up to 3,000 validators.

To perform this change, the MC will invoke a specific function (already implemented in the SSV Network contract) with the parameter 3000, in a single transaction. This function can only be called by the owner of the contract, which is the MC.
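
For illustration only, here is a minimal sketch of what such an owner-only call could look like from a script (TypeScript, ethers v6). The function name, ABI fragment, and contract address below are placeholders rather than the confirmed SSVNetwork interface, and in practice the MC would execute the call through its multisig rather than a plain wallet:

```typescript
import { ethers } from "ethers";

// Placeholder ABI fragment: the actual setter in the SSVNetwork contract may
// have a different name or signature.
const ssvNetworkAbi = [
  "function updateValidatorsPerOperatorLimit(uint32 limit) external",
];

// Placeholder address: substitute the real SSVNetwork proxy address.
const SSV_NETWORK_ADDRESS = "0x0000000000000000000000000000000000000000";

async function main() {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  // Only the contract owner (the MC) can call this; a plain signer is shown
  // here purely for illustration, a Safe transaction would be used in reality.
  const owner = new ethers.Wallet(process.env.OWNER_KEY!, provider);
  const ssvNetwork = new ethers.Contract(SSV_NETWORK_ADDRESS, ssvNetworkAbi, owner);

  // Single transaction setting the per-operator validator limit to 3000.
  const tx = await ssvNetwork.updateValidatorsPerOperatorLimit(3000);
  await tx.wait();
  console.log(`Validator limit per operator set to 3000 in tx ${tx.hash}`);
}

main().catch(console.error);
```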

4 Likes

Hey everyone, sharing some thoughts from AXBLOX.

So with DIP-37, I get why people want the validator limit raised: less overhead, lower operational costs, better efficiency. But I worry a bit about how this plays out for smaller operators like us.

We’ve been a verified operator for a while but still only have 1 validator directly. We’re lucky we also run validators through Lido’s Simple DVT and Ether Fi’s programs, but that’s not the same as getting stakers to choose us on our own.

If the limit jumps to 3,000 per operator, it’s great for the big, well-known teams: they can pack more validators into the same hardware, cut costs, lower fees and attract even more stake. But for smaller or solo operators, it makes the gap even wider.

Right now there are ~127,000 validators on SSV, so technically just about 42 operators could cover the whole network under this new cap. That’s efficient, but it puts more eggs in fewer baskets. If just a few big operators go down or get attacked, that’s thousands of validators impacted at once.

Since most of the time each validator is run by a cluster of 4 operators, losing one still means the validator works as long as the other 3 are fine, but it does cut into the safety margin. If one big operator with a huge share goes down, all those clusters instantly lose that piece of redundancy.

Swapping in a replacement at that scale could be messy and risky because adding a new operator means re-running DKG to reshare the key parts and update every affected cluster.

Plus, with fewer operators needed overall, it might discourage new people from spinning up nodes. Over time that could shrink the diversity that makes SSV strong.

I’d love to see the DAO match this limit increase with something to balance it, maybe more programs that delegate stake to smaller operators, or some mechanism that stops a handful of operators from dominating. Otherwise it’s gonna be really tough for smaller verified operators like us to grow on our own.

Just putting this out there, curious if anyone else feels the same.

1 Like

Valid concerns!

Watch out for a Temp Check presented by @GBeast and some other folks hitting the forum in the next 1-2 weeks. We need your input on it. I’m eager to see that discussion take off.

Thank you!

Ben

1 Like

Enabling a single operator to run more validators reduces the need for multiple operators, significantly improving operational efficiency and lowering server costs — particularly for those managing a large number of private operators. Since this capability is already within reach, lifting the current limit is not only reasonable, but also a strategic step toward greater scalability and sustainability.

Hey, this is our approach to this proposal!

First of all, we welcome the proposal and the initiative to increase the validator limit per operator. Post-Alan Fork, it makes sense to begin exploring higher capacity boundaries per operator, and we see this as a natural evolution in the network’s scalability.

That said, some concerns before expressing full support:

  1. Operator Performance at Scale
    As of today, there’s no public evidence that clusters composed of operators running more than 1,000 validators each — up to 3,000 — can maintain strong and consistent performance. Looking at Hoodi Explorer, the largest operator currently manages under 700 validators. We would feel more comfortable supporting this proposal if there were stress-test results available showing performance under a 3,000-validator load.
  2. Call for Stress Testing
    If such tests have already been conducted (on testnet or otherwise), we’d appreciate it if the data could be shared with the community. If no such test exists, we believe this increase should be preceded by a public stress test to validate the robustness of the proposed limit.
  3. Phased Rollout for Verified Operators
    A potential middle-ground could be enabling the 3,000-validator limit only for Verified Operators initially. These operators have already demonstrated high performance and reliability and could help validate the upgrade in a controlled way on mainnet before opening it to all.

We’re supportive of scaling the network — and confident that it can be done — but believe caution and validation are key as we move into higher-load configurations.

About the point raised by @AXBLOX, we don’t see how this could negatively affect smaller operators — as long as they are verified. On the contrary, increasing the validator cap per operator should allow for better scalability and potentially a lower cost per validator.
It’s true that end users might prefer operators with a higher number of validators, but we believe this is less about the current validator count and more about each operator’s strategy, reliability, and effort to attract validators.

Looking forward to hearing more from the core team and community on this.

Hi @Ethernodes, appreciate your take and totally agree on the stress testing, I fully support that step before moving forward.

Just to clarify our side, the issue for smaller operators isn’t about being verified or not, it’s about scale and margins. Bigger operators running thousands of validators get a lower cost per validator, so they can drop fees and bundle way more stake on the same hardware. Smaller verified operators with just 1–2 validators can’t match that pricing power, so we risk getting squeezed out over time.

Also, with the new cap, technically just ~42 operators could handle the whole network right now. So if we end up with a few mega-operators, any outage, attack or misconfig hits thousands of validators at once and cuts into the fault tolerance we rely on. Rotating in new operators isn’t plug-and-play; it means redoing DKG and reconfiguring all affected clusters, which is a heavy lift if it happens under stress.

So yeah, the higher limit makes sense for scaling, but it does shift the power more toward big players. I’d just like to see something added, like incentives or delegation programs, so smaller verified operators don’t just stay stuck at 1 or 2 forever.

Decentralization is a core value of Ethereum, and SSV is built to support that same ethos. This proposal clearly risks pushing us closer to centralization if we don’t plan for it properly.

Appreciate everyone weighing in, good to see this part of the conversation too!

Yuting here — thanks to @Ivan and the core team for putting DIP-37 on the table. I’m in favour of raising the cap, but I’d like to flag three items before flipping the switch.


1. Fee-compression & market structure

What changes
Alan re-organises validator traffic so that one slot now triggers “one big message” instead of “one message per validator”.
Result: when an operator triples its validator count, machine load rises only modestly. In practice cost per validator slides by roughly 60–70 %.

Why it matters
A large operator can remain profitable charging ~0.4 SSV per year, whereas a small 50-validator shop still needs around 1 SSV just to cover fixed hardware and monitoring. If a few mega-operators decide to price aggressively, we could see a race-to-zero that squeezes out the long tail and narrows real choice for stakers.
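
To make the fee-compression point concrete, here is a toy cost model (a sketch only; the fixed and marginal cost figures are illustrative assumptions, not measured numbers):

```typescript
// Toy model: an operator's yearly cost per validator, assuming a fixed
// infrastructure cost (hardware, monitoring) spread over all validators,
// plus a small marginal cost per extra validator.
function costPerValidatorUsd(
  validators: number,
  fixedCostPerYearUsd: number,
  marginalCostPerValidatorUsd: number,
): number {
  return fixedCostPerYearUsd / validators + marginalCostPerValidatorUsd;
}

// Hypothetical inputs: 1,500 USD/year fixed cost, 0.5 USD/year marginal cost.
console.log(costPerValidatorUsd(50, 1500, 0.5));   // ~30.5 USD per validator per year
console.log(costPerValidatorUsd(3000, 1500, 0.5)); //  ~1.0 USD per validator per year
```

The exact numbers don’t matter; the shape does: fixed costs dominate at small scale, which is why a 3,000-validator operator can undercut a 50-validator one on fees.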


2. Client-diversity guard-rail

I guess 99 % of mainnet nodes still run Go-SSV. Scaling without checks risks locking in a monoculture.

Suggestion: any operator serving > 1,500 validators must run ≥ 2 independent clients (e.g. Go + Anchor/Rust).


3. Telemetry & transparency

Let’s surface the Alan savings (-54 % CPU, -80–90 % bandwidth per validator) in a live dashboard and require anonymous telemetry opt-in for nodes running > 1,500 validators. Data-driven evidence will make the next jump (5,000? 10,000?) far less contentious.

Eager to hear everyone’s thoughts!

3 Likes

Good points Yuting, thanks for laying it out clearly.

So basically: Alan fork cuts costs hard but big operators could crush small ones on fees. We need more client diversity to avoid a client monoculture (one bug in Go-SSV could hit everyone). Maybe find ways to encourage operators to adopt the upcoming Anchor too before increasing the validator limit per operator?

Hi @AXBLOX @Yuting !

Thank you very much for consistently engaging in the discussions!

While I’d like to address each of the points raised here, I’d like to note that it’s very important for the DAO to have genuine discussions that come as grassroots efforts from actual concerns the DAO might have.

Here, you have raised the following concerns:

  1. That small public operators might have difficulty remaining competitive if the per operator validator limit were to increase.

  2. Testing data as validation that this upgrade won’t cause any performance issues.

The DAO’s public operators have been hard at work identifying the issues they face, from the way the operator fee is structured to operator discoverability and so on. However, the concerns raised here are not among the ones they have raised. I believe I know why this is the case: when I plugged the text of this proposal into CGPT, I got the same concerns mentioned above.

CGPT and similar tools do not understand the nuance of a discussion and are far from being able to provide genuine feedback; this proposal, and the concerns raised here, are evidence of that. Here is why.

  1. The small public operators CGPT mentions are not competing with “big operators”.

Big operators can be seen as either public or private.

Small operators are not competing with any private operators, because private operators have no interest in taking on validators that might choose public operators, nor can any validator choose said private operators. These private operators will definitely see a reduction of costs for their operations if this proposal were to pass.

Small operators are also not competing with “big public operators”, because all public operators that currently exist have approximately 5k validators spread among them in total, so they will not see the gains CGPT mentions that would supposedly make “small operators” less competitive; there is almost no chance that 20+ public operators would somehow start merging into one operator of 3k validators.

  2. Testing data as validation of this upgrade is, in principle, a valid request, and I have requested information on it from the labs team and the DAO. Again, though, it lacks pertinency and genuineness, and it’s important that these questions have both, because otherwise we are wasting the resources of an up-and-coming DAO and labs team on AI nonsense. This question has already been raised by some DAO core members, but as a genuine one, since they run operators and are interested in how a substantial increase in validators on their operator would affect all of its parameters.

So I’d like to end on the same note that I started on: I believe it’s great that you are consistently engaging in the discussions with the DAO, and any DAO is only as strong as the discussions people have publicly, which is why it’s very important that these discussions are based on genuine concerns and problems, instead of what a machine might think the problem is.

1 Like

Thanks @Ivan, I get your point and I agree, real talks matter. I do use AI to help dig through all the info (it’s a lot and it’s complex) but in the end I speak from my own experience and what I know.

AI helps me spot stuff I might miss, but it doesn’t decide for me. If I misunderstand something, this forum is here so I can learn and understand better.

Your point about public vs. private operators makes sense, especially with the big public ones having about 5k validators total. So ~96% of the SSV network is running with private operators; you just made me realize that.

But if private operators benefit the most from the higher cap, won’t that pull even more ETH holders their way, especially since they usually have smoother UX?

Even if public operators won’t lose current validators, the new flow might lean more private.

Also, are we assuming that demand for public operators won’t really take off later? Just wondering if we should still care about keeping public operator diversity healthy in case they scale a lot later or if in reality most big stake will most probably go for private setups anyway?

I think a generally accepted point is that raising the validator limit per operator will improve efficiency but also increase centralization risk by concentrating more stake with fewer operators.

The question is, do we want that and what are all the pros and cons?

Hey everyone, my name is Matus and I am a core blockchain team lead at SSV Labs.

We are currently running three clusters with 3,000 validators on Hoodi — each with 4 operators — and one additional cluster with 7 operators.

Here are some stats that might be interesting.

CPU

We observe a 50–100% increase in CPU usage, which is expected due to the higher number of BLS duties to sign. Nevertheless, the overall increase is not significant in absolute terms, rising only from 0.3 vCPU to 0.4 vCPU on average.

(Charts: CPU usage for the 1.6k cluster vs. the 3k cluster)

Memory

There is also an increase in memory consumption, but it is not significant: from 1.1 GB to 1.4 GB, still under 2 GB.

(Charts: memory usage for the 1.6k cluster vs. the 3k cluster)

Attestation rate / effectiveness

Attestation Rate: 99.99%
Effectiveness: 97.19%
Correctness: 99.74%
27 Proposals were assigned, 27 were executed successfully
MEV: 8 out of 27 proposals had an MEV relay
   Aestus: 1
   Flashbots: 6
   TitanRelay: 1
Sync Committee Rate: 94.45%

Amount of currently active validators: 3000

So performance is really good and HW resources did not increase significantly.

General note
While the SSV node itself scales efficiently, the resource footprint of the underlying Ethereum clients can grow over time.

  • EL (Execution Layer) and CL (Consensus Layer) resource usage could rise alongside network traffic with a higher validator count.
  • Operators should monitor EL & CL usage closely and size their machines according to the high‑load hardware recommendations published by their chosen clients.
    This ensures head‑room for future load spikes and prevents missed duties.
4 Likes

Thank you for your understanding @AXBLOX !

AI can do wonders for initial research and crossing language barriers, so I’m not against it per se, I’m just against it when it sprouts discussions for the sake of discussions.

To your question of whether a higher cap would pull even more ETH to private operators, given their smoother UX:

Just for the sake of stating the obvious, private operators “can’t pull” any ETH since they are private, and no one can choose them on SSV.

For the part that is less obvious, if your question was that these private operators might have other channels through which they secure their ETH, and that this ability, coupled with the increase in the validator cap, would make them “more competitive” since they can drive the fee down, I can say a few things.

  1. From what I understand the private operator industry to be, they are targeting a type of ETH holder that would never choose a “random” public operator (even a verified one) due to several factors. The way these entities secure ETH being deposited with them is first-party KYC/KYB, dedicated support channels for the deposited ETH, dedicated client support, SLAs, legal contract slashing protection, liability assignment and so on. All of this is something no public operator offers at this time.

  2. Even if everything under the paragraph above were not an issue, I believe certain public operators have tried to demonstrate, and anecdotally have demonstrated, that the operator fee is not a factor for new ETH coming in to them. I believe there was an example of someone having great performance while keeping their fee at ~0.24 SSV for several months and not getting a single validator deposited with them. Some VOC members are currently hard at work understanding all the reasons why this is the case, but it’s sufficient to show that the Alan fork and Pectra, while providing great optimizations, even when realized to their fullest capacity, will not lead to more validators for public or small operators.

Also, if the opposite were true, that what allows these private operators to have so much ETH is indeed the fact that they scale so well due to the recent SSV and ETH updates, and that this update might push that even more, I believe that we should not want a DAO or a project that stunts its technological supremacy because such supremacy might have negative externalities.

I believe that the public operators are a key component of the DAO and SSV as a whole. The question is striking a balance between the project’s sustainability and the sustainability of public operators. Currently, I believe there are projections that the SSV DAO will generate around 300k SSV in fees this year, and 95% of these fees are being paid by the private operators. How the DAO can keep this essential funding while also maintaining public operators, until these private entities inevitably understand there’s value in choosing public operators, is something the DAO needs to decide.

As for your final question, whether the pros outweigh the cons: as I noted in my last paragraph, it’s on the DAO to decide, one proposal at a time, which direction it wants to go in.

Also, do let me know if what my colleague Matus has provided from a technical perspective answers your concern regarding the stress-test aspect and performance requirements of this update. :folded_hands:

1 Like

Thanks for sharing the data you manage on the testnet for this upgrade. This was a real concern for us, as before the Alan fork, operators with 500 validators were having issues and poor performance (at least in our case)!

We fully support the increase now that testnet results have been showcased.

1 Like

Hey @AXBLOX! We get your concerns and pulled out some operator data:
→ 1,769 operators registered, of which 1,460 are private & 309 are public
→ Those public operators hold 12,056 keys, and their average fee is 0.82 SSV.
→ Of those 12,056 keys, only 2,282 are assigned to operators charging a fee lower than 0.3 SSV.

About the concern that ~42 operators could handle the whole SSV network: it’s a bit of a simplification. Operators are chosen by protocols & users, meaning that for this to happen you would need not only operators to collude, but also protocols. And even then, all middle and small size operators would have to close operations and their users would have to funnel validators to the big ~42 players. We honestly don’t see this as a concern.

Thanks for raising these points, it pushed us to actually pull all the SSV operators’ info and analyze it. Happy to share the data :slight_smile:

1 Like

Thanks for sharing and clarifying that SSV clusters can indeed easily support 3,000 validators. Though I’m wondering why these 3 × 3,000-validator clusters are not visible on the SSV explorers on Hoodi?

@Ivan

Testing data as validation of this upgrade is, in principle, a valid request, and I have requested information on it from the labs team and the DAO. Again, though, it lacks pertinency and genuineness, and it’s important that these questions have both, because otherwise we are wasting the resources of an up-and-coming DAO and labs team on AI nonsense.

Why is a valid request to test a major upgrade first before deploying it on mainnet a waste of resources, generated by AI or not? You acknowledge it in the opening part of your sentence, yet dismiss it simply because it may have been generated by an AI. That in itself seems nonsensical to me.

2 Likes

Hey @BumpyTale, that’s because we are running our devnet on Hoodi too, and that’s where we do all the stress testing. So it’s the same Ethereum network, but not the same SSV network.

1 Like

@BumpyTale

In my opinion, if a request is not actually felt by someone like you as a public operator (it was felt by others, as noted in my initial response), but is instead generated by asking AI “tell me what I should be concerned about”, it does not amount to a genuine request, and as such it creates discussions for the sake of discussions. While AI might catch an issue you yourself would not, it’s very important to check the AI’s output and ask yourself: do I feel this is actually an issue for me, or for someone else? If so, then it’s a valid question.

I would not say that I have dismissed the question regarding data validation at all (AI-inspired or not); in my very response I said we would provide the data requested, which Matus has done. If you happen to have additional questions, feel free to ask. :folded_hands:

As noted in my previous responses, this is my opinion on AI and how it should be used for the benefit of the DAO and the project. People can have different views and that’s fine, as I’ve said, I’m grateful to @AXBLOX and @Yuting for engaging in the discussion regardless of AI. :folded_hands:

1 Like

Thank you to everyone who took the time to answer my open questions. It gives me more confidence to decide on this DIP-37.

Also thank you @matus for the technical data from the tests you and the team at SSV Labs are running on your devnet on Hoodi.

I tend to agree overall that this proposal could be a good thing.

Yes, ETH holders will most probably continue to choose private operators for easy UX, support, legal protection, etc. In general, I agree that we should not hold back great tech just because it might have some side effects.

On the tech and business sides, this DIP-37 makes sense for SSV. Here I’m trying to act more like a guardian of the Ethereum ethos, especially regarding the centralization side effect this proposal could bring.

I am a strong advocate of decentralization in the sense that no single person, group of people or company controls it. Power is spread out so no one can shut it down or censor it.

Of course, we would still be far away from any real centralization even if this DIP-37 passes, I agree, but this proposal will push toward reducing DVT’s decentralization by letting big operators run way more validators alone, concentrating power instead of spreading it across more diverse operators.

From a philosophical point of view, I will dare to make an analogy with Alon Muroch’s position on the upcoming Robinhood Chain. He highlights how big centralized players can use Ethereum as just a backend to make money, risking true crypto projects getting sidelined.

https://x.com/AmMuroch/status/1940403874180874503

I can see valid comparisons here with the validator cap limit increase or big operators in staking. If power concentrates too much, we lose the open, decentralized spirit Ethereum was built for.

I haven’t decided yet, still on the fence. :grin:

1 Like

I’m taking the time here to play devil’s advocate because SSV Network isn’t a typical corporation focused only on short-term profits, I see it more as a public good for Ethereum.

If we assume SSV Network sticks to Ethereum’s ethos, I think we should keep pushing the tech forward in that same spirit.

As you might know, an important part of Ethereum’s rollup-centric roadmap is making sure anyone with 32 ETH and normal hardware can still run a validator at home, no need for high-cost, super-performant hardware.

Rollups push the heavy stuff to L2, keeping L1 light and decentralized so regular people can help secure the network without crazy server bills. That’s the whole point, scale up without losing the open, permissionless vibe.

Now in 2025, SSV 2.0 and others are trying to fix the lack of value flowing back to L1 and the fragmentation issues that L2 rollups brought to the Ethereum ecosystem. This situation is really the result of a collective choice made to keep Ethereum decentralized.

With that in mind, when I read this part of Matus’s post, it got me thinking:

Consensus clients (like Nimbus, Lighthouse, Teku, Prysm) scale with how many validators you run, but they only publish basic hardware requirements (CPU, RAM, disk, network); it’s up to big operators to scale their setups properly for lots of validators.

Based on what experienced node operators and staking providers recommend, we can estimate that running 1,000 to 3,000 validators means scaling up the hardware for the EL, CL, and SSV node. A solid starting point for ~3,000 validators would look like this:

12–16 cores, 48–64 GB RAM, 4 TB NVMe SSD, 200+ Mbps

By comparison, a typical solo home staker running just 1 validator with a full EL + CL node usually does fine with:

2–4 cores, 8–16 GB RAM, 1–2 TB NVMe SSD, 20–50 Mbps

So I’m bringing up this hardware reality check to back up my point. This validator limit increase mainly benefits big players with deep pockets, given the cost just for the hardware and internet.

Of course, any operator who wants to scale up needs to buy more hardware sooner or later, that’s normal. But when the limit goes up, big operators can pack way more validators onto the same machine or cluster so the cost per validator drops a lot. That’s good for efficiency, but it gives bigger operators, whether private or public, a clear cost advantage that smaller operators can’t match at the same scale.

The cost of buying the necessary hardware (or renting servers) and the internet connection still becomes a barrier to entry for new operators (public or private) who want to grow their small operation and stay competitive, considering that the cost per validator goes down as you add more.

I’m still not fully decided about this proposal, but I’m trying to bring a different perspective here. I will admit it’s a tough one and it’s more complicated than just saying it won’t affect small operators and it is good to cut operational costs. To me, it’s more of an ideological debate.

Hi @AXBLOX

Devil’s advocate or not, it’s always good to have a discussion. :folded_hands:

When it comes to the hardware requirements of a prospective operator, I think it’s important to take the wider picture into consideration.

At the end of the day it’s all about scale.

I believe it’s unrealistic to have a person secure 80-240 million USD worth of ETH (at the current ETH price of ~2.5k) on a machine + infra that costs less than 500 USD.

On the flip side, the fact that the 12-16 cores and other requirements you have mentioned can be had for around 1,500-2,000 USD, while securing 80-240 million USD, is actually insane.

That would mean that you would recover your hardware cost in just one year (1k validators) or 4 months (3k validators) at an operator fee of 0.24 SSV (and current price of SSV at ~7 USD). In other words, I believe SSV’s tech is probably the best out there when it comes to performance, and the corresponding requirements.
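
For anyone who wants to sanity-check that estimate, here is a quick back-of-the-envelope sketch; all inputs are the assumptions above (a 0.24 SSV per-validator yearly fee, ~7 USD per SSV, and the upper ~2,000 USD hardware estimate):

```typescript
// Back-of-the-envelope payback calculation using the assumptions stated above.
const feePerValidatorSsvPerYear = 0.24; // operator fee per validator per year
const ssvPriceUsd = 7;                  // assumed SSV price
const hardwareCostUsd = 2000;           // upper end of the hardware estimate

for (const validators of [1000, 3000]) {
  const yearlyRevenueUsd = validators * feePerValidatorSsvPerYear * ssvPriceUsd;
  const paybackMonths = (hardwareCostUsd / yearlyRevenueUsd) * 12;
  console.log(
    `${validators} validators: ~${yearlyRevenueUsd.toFixed(0)} USD/year, ` +
    `hardware payback in ~${paybackMonths.toFixed(1)} months`,
  );
}
// 1,000 validators → ~1,680 USD/year → ~14 months
// 3,000 validators → ~5,040 USD/year → ~5 months
```

With the lower ~1,500 USD hardware figure, the payback comes out closer to the one year and 4 months quoted above.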

However, the only problem with this is how you get those validators, which is what I briefly touched on in my earlier post and something some VOC members are hard at work on.

2 Likes