One of blockchain’s core value propositions is immutability. From the early days of Bitcoin, a major appeal of the technology was that once a transaction was on the blockchain, it was there forever – verifiable, transparent, and permanent. With the advent of Ethereum, this immutability expanded from simple immutable transactions on a ledger to complex immutable web applications in the form of smart contracts. Write your code, deploy the contract to the blockchain, and the code is permanent. Users can audit it once and trust it forever.
But immutability is a double-edged sword. If you accidentally deploy a bug, the bug is permanent too. Optimizations, fixes, and new features require deploying a new contract and convincing your users to migrate, not to mention the complexity of ensuring that all their data and funds go with them. While immutability can be a godsend for trust and security, it can also create real problems for developers trying to build products that need to evolve.
Of course, blockchain developers created a solution: proxy contracts.
Upgradeable proxies solve the problem of immutability by separating storage from execution logic. In an upgradeable proxy pattern, the contract is split into two: a proxy contract and an implementation (logic) contract. The proxy holds the state – user balances and application state like variable settings, configuration and so on – while the separate implementation contract contains the execution logic that can modify that state. Now when you need to fix a bug or add a new feature, you simply deploy a new implementation contract and point the proxy at it. Users keep interacting with the same address, their balances are preserved, and you get the flexibility to improve your contract over time.
But upgradeability introduces a new trust assumption: When your contract calls an external upgradeable proxy, you’re not just trusting the current implementation you audited. You’re trusting whoever controls the upgrade mechanism not to deploy malicious code in the future. No matter how thoroughly you reviewed the original logic, the proxy administrator can change it at any time. This is directly antithetical to one of the core tenets of blockchain applications and blockchain in general: what good is an “immutable” smart contract if its code can be upgraded and changed at any time?
A Primer on How Upgradeable Proxies Work
Note: A repo with relevant sample code is available at the RealWorldProgrammer GitHub.
The proxy pattern achieves upgradeability through a clever separation of concerns. When a user calls a function on a proxy contract, the proxy uses a special opcode called delegatecall to execute code from a separate implementation contract. The critical property of delegatecall is that it runs the implementation’s code in the proxy’s storage context. This means that while it is the implementation contract’s code that runs, it is the proxy’s storage that is used and modified. Therefore, changing the underlying functionality of the proxy is as simple as changing which implementation address it delegates to.
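At its core, a proxy is just a fallback function that forwards every call with delegatecall. Here is a stripped-down sketch for illustration only – note that real proxies like ERC1967Proxy store the implementation address in a dedicated, standardized slot precisely to avoid colliding with the implementation’s own variables:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Minimal illustration of delegatecall-based forwarding – not production code.
// Storing `implementation` in slot 0 would collide with the implementation's
// own storage layout; ERC1967 proxies use a dedicated slot instead.
contract BareProxy {
    address public implementation;

    constructor(address impl) {
        implementation = impl;
    }

    fallback() external payable {
        address impl = implementation;
        assembly {
            // Forward the full calldata to the implementation,
            // running its code against this proxy's storage.
            calldatacopy(0, 0, calldatasize())
            let ok := delegatecall(gas(), impl, 0, calldatasize(), 0, 0)
            // Bubble up whatever the implementation returned or reverted with.
            returndatacopy(0, 0, returndatasize())
            switch ok
            case 0 { revert(0, returndatasize()) }
            default { return(0, returndatasize()) }
        }
    }
}
```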
There are two dominant patterns in production use today:
Transparent Proxies
The Transparent Proxy pattern uses a separate ProxyAdmin contract that controls upgrades. The proxy checks if the caller is the admin, and if so, executes admin functions like upgradeTo() directly on the proxy itself. If the caller is anyone else, it delegates to the implementation contract. This prevents the admin from accidentally calling implementation functions, but requires deploying an additional admin contract and managing another key.
UUPS Proxies
The UUPS (Universal Upgradeable Proxy Standard) pattern takes a different approach by placing the upgrade logic in the implementation contract itself. The implementation inherits from OpenZeppelin’s UUPSUpgradeable and includes an _authorizeUpgrade() function that controls who can upgrade. This is more gas-efficient and doesn’t require a separate admin contract. It also means the upgrade logic itself can be upgraded, including upgrading to an implementation with no working upgrade path at all, which prevents future upgrades and makes the implementation code immutable forever.
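That final lock can be sketched as follows (a hypothetical FinalLogic contract, not part of the research repository): an implementation that simply reverts in _authorizeUpgrade(), so that once the proxy points at it, no future upgrade can ever pass the check.

```solidity
// Hypothetical sketch: upgrading the proxy to this implementation freezes it.
// Any later upgrade attempt will revert inside _authorizeUpgrade().
// A real version would also have to preserve the existing storage layout
// and carry forward the business logic users still need.
contract FinalLogic is UUPSUpgradeable {
    function _authorizeUpgrade(address) internal pure override {
        revert("Upgrades permanently disabled");
    }
}
```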
Here’s a minimal example from the research repository. The SimpleLogic contract is a UUPS implementation that stores a single value (initially 42):
contract SimpleLogic is Initializable, OwnableUpgradeable, UUPSUpgradeable {
    uint256 public value;

    /// @custom:oz-upgrades-unsafe-allow constructor
    constructor() {
        _disableInitializers();
    }

    function initialize(address initialOwner) public initializer {
        __Ownable_init(initialOwner);
        __UUPSUpgradeable_init();
        value = 42;
    }

    function setValue(uint256 newValue) public {
        value = newValue;
    }

    function _authorizeUpgrade(address newImplementation) internal override onlyOwner {}
}
The proxy that fronts this implementation is straightforward: it wraps OpenZeppelin’s ERC1967Proxy and exposes a public implementation() getter around the internal _implementation() function for convenience in obtaining the implementation address (note that this getter is not required – there are other ways to obtain the address):
contract SimpleProxy is ERC1967Proxy {
    constructor(address implementation_, address initialOwner)
        ERC1967Proxy(
            implementation_,
            abi.encodeWithSignature("initialize(address)", initialOwner)
        )
    {}

    function implementation() public view returns (address) {
        return _implementation();
    }
}
The proxy/logic pair is deployed by first deploying the implementation contract, then deploying the proxy and passing it the implementation address and an initial owner (who will own the proxy contract). Upon deployment, the proxy’s implementation address is set to the address of the logic contract, allowing users to call any function on the logic contract via the proxy contract’s address. When the proxy receives a call to any function not defined on the proxy itself – for example, when a user calls setValue() – it delegatecalls to the implementation address. When you as the developer need to make changes to the code, like changing the setValue() function, you simply deploy a new implementation contract with the updated code, then call upgradeToAndCall() (a function inherited from UUPSUpgradeable and invoked through the proxy), passing it the new implementation address.
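The whole lifecycle can be sketched in a few lines, as they might appear inside a deployment script or test (assuming the SimpleLogic/SimpleProxy pair above; SimpleLogicV2 is a hypothetical second version with the updated code):

```solidity
// 1. Deploy the implementation, then the proxy pointing at it.
SimpleLogic logic = new SimpleLogic();
SimpleProxy proxy = new SimpleProxy(address(logic), msg.sender);

// 2. Users interact with the logic through the proxy's address.
SimpleLogic(address(proxy)).setValue(7);

// 3. To upgrade, deploy the new version and point the proxy at it.
//    The call goes through the proxy and must pass _authorizeUpgrade().
SimpleLogicV2 logicV2 = new SimpleLogicV2();
UUPSUpgradeable(address(proxy)).upgradeToAndCall(address(logicV2), "");
```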
The key insight here is that regardless of which proxy pattern you use, whoever controls the upgrade mechanism has absolute control over the contract’s behavior. In the Transparent Proxy pattern, compromising the proxy admin’s key means you can deploy any implementation you want. In the UUPS pattern, compromising the implementation’s owner (or any mechanism which passes the _authorizeUpgrade() check) gives you the same power. Even more importantly, for all proxies, upgrading changes the contract’s behavior without changing the proxy’s address. Users (including contracts) continue to interact with the same proxy address before and after the upgrade. This means an attacker who gains upgrade access can modify a proxy contract’s functionality while it continues to receive calls from dependent contracts and users that have no idea it changed.
The PAID Network Attack: March 5, 2021
The problems described above aren’t theoretical. On March 5, 2021, a proxy upgrade vulnerability materialized in the PAID Network hack. In this incident, an attacker gained control of the upgrade mechanism and deployed a malicious implementation contract that could burn tokens from any address and mint the entire available supply to themselves. They then proceeded to do exactly that – burning nearly 60 million PAID tokens worth $180 million from existing holders – and then re-minting them all to their own address. While the PAID Network’s response was swift, the damage was painful: in the roughly 17 minutes before the team noticed the abnormal transactions, $3.16 million worth of the stolen PAID tokens were swapped for ETH on Uniswap and promptly liquidated into the attacker’s accounts. For obvious reasons, the remaining PAID tokens sitting in the attacker’s account quickly became worthless, representing a total loss for the rightful owners of the tokens and a scar on PAID Network’s reputation from which it would never recover. The attack succeeded not because the audited code was insecure, but because one of the network’s contracts was upgradeable, and the upgrade mechanism itself was not sufficiently protected.
What makes this attack particularly relevant is that it exposes two distinct security problems. The first – protecting your own proxies from malicious upgrades – is what the PAID team failed to address. The second problem applies to anyone who calls external upgradeable contracts: how do you protect your contract when a dependency you don’t control is compromised? If your lending contract calls an external price oracle that is maliciously upgraded, your users lose funds even though the vulnerability wasn’t in your code.
Now, let us examine both of these problems in depth, with particular focus on defensive strategies for contracts that call external upgradeable dependencies.
Two Separate Security Concerns
The PAID Network attack illustrates a critical distinction that’s often conflated in discussions of proxy security. There are actually two separate problems with different threat models and different solutions.
Problem 1: Protecting Your Own Proxies
This first problem is what PAID Network failed to solve: If you deploy an upgradeable proxy, you need to protect the upgrade mechanism itself at all costs. PAID’s implementation used a Transparent Proxy pattern which, while not inherently insecure itself, used a single externally-owned account to control the ProxyAdmin contract. No multisig, no timelock, no governance … nothing. When that single private key was compromised (the attack vector was never publicly disclosed), the attacker had immediate, unrestricted access to deploy any implementation they wanted. They chose to deploy one with custom burn() and mint() functions hardcoded to their own address, bypassing all the normal authorization checks in the original audited code. In fact, the original audited implementation contract didn’t even have a mint() or burn() function! The attacker essentially copy-and-pasted PAID’s original contract, added a couple of functions, and deployed their own version, swiftly using the compromised private key to upgrade the proxy to use it.
The solution to problem 1 is relatively straightforward in theory: use better access controls. A multisig upgrade mechanism would have required that multiple keys sign the upgrade transaction, so compromising a single key wouldn’t have been enough. A timelock controller would have added a mandatory delay between proposing an upgrade and actually executing it, giving the community time to notice the pending transaction and alert the proper channels before it happened. DAO governance requires a public proposal and token holder vote before any upgrade can proceed. These mechanisms don’t prevent upgrades, but they make malicious upgrades much harder to execute silently, even if the threat is from inside.
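As a sketch of what this can look like with OpenZeppelin’s TimelockController (assuming the SimpleLogic proxy from earlier and a hypothetical multisig address), the timelock becomes the owner, so every upgrade must be publicly queued and survive the delay before it can execute:

```solidity
// Only the multisig may queue operations; anyone may execute them after the delay.
address[] memory proposers = new address[](1);
proposers[0] = multisig; // hypothetical multisig address, e.g. a Safe

address[] memory executors = new address[](1);
executors[0] = address(0); // address(0) = open execution once the delay passes

TimelockController timelock = new TimelockController(
    48 hours,   // minimum delay between scheduling and executing an operation
    proposers,
    executors,
    address(0)  // no optional admin; the timelock administers itself
);

// Hand ownership to the timelock so only queued, delayed operations
// can pass _authorizeUpgrade()'s onlyOwner check.
SimpleLogic(address(proxy)).transferOwnership(address(timelock));
```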
Problem 2: Protecting Yourself From Maliciously Upgraded External Proxies
The second problem is more subtle and affects anyone building contracts that call external upgradeable dependencies. Suppose you’re building a lending contract that needs token prices. You integrate with an external price oracle that’s implemented as an upgradeable proxy. You audit the current implementation thoroughly, verify it’s secure, and deploy your contract with calls to that oracle. Everything works perfectly.
Then one day, without warning, the oracle’s owner deploys a malicious implementation. Maybe their key was compromised like PAID’s. Maybe they were always malicious and had simply been waiting for the perfect moment that would maximize the payout. Either way, your lending contract is now calling a price oracle that returns manipulated prices. Users of your protocol get liquidated when they shouldn’t be. Underwater positions don’t get liquidated when they should. Your users lose funds, your reputation is destroyed, and the vulnerability wasn’t even in your code.
This is problem 2. You simply do not (and cannot) control the external proxy. You can’t add a timelock to it or change its admin to a multisig. You’re entirely dependent on whoever owns that proxy to maintain its security, and you trust them unconditionally to do it. If they fail (or choose not to), you fail. The proxy pattern means you’re not just trusting the code you audited, you’re trusting the code’s maintainers, you’re trusting every future version that could be deployed, and you’re trusting the security of the upgrade mechanism even though you have no control over it. This is why OpenZeppelin warns:
Using upgradeable proxies correctly and securely is a difficult task that requires deep knowledge of the proxy pattern, Solidity, and the EVM
– OpenZeppelin
The rest of this post focuses on problem 2, because it’s the one that affects you as a contract developer integrating with external dependencies. The solutions aren’t about preventing upgrades (you can’t), but about detecting them, verifying them, and failing safely when an unapproved implementation appears.
Defense Strategy #1: Approved Implementation Registry
Before we begin, beware that these examples are for illustrative purposes only and are not intended for secure production use. That said, the core defensive pattern for problem 2 is straightforward: before calling an external proxy, verify that its current implementation is on your approved list. This check happens at runtime, on every call, and reverts if the external dependency has been upgraded to an implementation you haven’t explicitly approved.
The cleanest architecture for this is a separate registry contract. Here’s the example ImplementationGuard contract from the research repository:
contract ImplementationGuard is Ownable {
    mapping(address => mapping(address => bool)) private approvedImplementations;

    event ImplementationApproved(address indexed proxyAddress, address indexed implementation);
    event ImplementationRevoked(address indexed proxyAddress, address indexed implementation);

    constructor(address initialOwner) Ownable(initialOwner) {}

    function approveImplementation(address proxyAddress, address implementation) external onlyOwner {
        require(proxyAddress != address(0), "Invalid proxy address");
        require(implementation != address(0), "Invalid implementation address");
        require(!approvedImplementations[proxyAddress][implementation], "Already approved");
        approvedImplementations[proxyAddress][implementation] = true;
        emit ImplementationApproved(proxyAddress, implementation);
    }

    function revokeImplementation(address proxyAddress, address implementation) external onlyOwner {
        require(approvedImplementations[proxyAddress][implementation], "Not approved");
        approvedImplementations[proxyAddress][implementation] = false;
        emit ImplementationRevoked(proxyAddress, implementation);
    }

    function isApproved(address proxyAddress, address implementation) external view returns (bool) {
        return approvedImplementations[proxyAddress][implementation];
    }
}
The registry is deliberately simple and non-upgradeable. It stores a mapping of proxy addresses to approved implementation addresses and provides functions to approve, revoke, and check approval status. The owner controls which implementations are trusted and is the only one who can approve or revoke implementations. This could be further secured in the same way as a proxy – multisig, timelocks, DAO governance. Importantly, the registry is separate from both your main contract and the external dependencies it tracks, which means it’s reusable across multiple contracts in your system and can be upgraded independently if you choose to make it upgradeable.
Your core contract(s) then use this registry to gate all external calls. Here’s a GuardedProxyClient, for example:
// Minimal interfaces for the external dependency, so the example is self-contained.
interface ISimpleProxy {
    function implementation() external view returns (address);
}

interface ISimpleLogic {
    function setValue(uint256 newValue) external;
    function value() external view returns (uint256);
}

contract GuardedProxyClient {
    ImplementationGuard public immutable guard;
    ISimpleProxy public immutable externalProxy;

    event ValueSet(uint256 newValue);

    constructor(address guardAddress, address proxyAddress) {
        guard = ImplementationGuard(guardAddress);
        externalProxy = ISimpleProxy(proxyAddress);
    }

    modifier onlyApprovedImplementation() {
        address implementation = externalProxy.implementation();
        require(
            guard.isApproved(address(externalProxy), implementation),
            "Implementation not approved"
        );
        _;
    }

    function safeSetValue(uint256 newValue) external onlyApprovedImplementation {
        ISimpleLogic(address(externalProxy)).setValue(newValue);
        emit ValueSet(newValue);
    }

    function safeGetValue() external view onlyApprovedImplementation returns (uint256) {
        return ISimpleLogic(address(externalProxy)).value();
    }
}
The pattern is simple: before every external call, read the proxy’s current implementation address and check if it’s approved in the registry. If not, revert. This means if the external proxy upgrades to a new implementation you haven’t reviewed, your contract stops working immediately and safely. Your calls don’t execute against unknown code.
Tradeoffs
That last point is a tradeoff worth examining. When an external dependency upgrades, your contract becomes non-functional until you approve the new implementation. For some applications, this downtime is unacceptable. For most though, it’s far preferable to the alternative of blindly calling a potentially malicious implementation. The key insight is that you’re trading availability for security. Your contract might pause, but it won’t be exploited.
In the PAID Network scenario, this pattern would have protected any contract that integrated with the PAID token. When the attacker deployed their malicious implementation and upgraded the proxy, the next call from an integrated lending contract would have hit the onlyApprovedImplementation check, seen that the new implementation wasn’t approved, and reverted. The lending contract would stop working, but it wouldn’t call the malicious burn or mint functions. In fact, it wouldn’t be allowed to call any of the new implementation contract’s functions. Users of your lending protocol couldn’t interact with PAID through the lending contract, but they would be protected against anything malicious the new implementation contract might do.
With respect to legitimate upgrades, the approval workflow is relatively simple as well: your security team monitors for upgrades (more on this in Defense Strategy 3), reviews the new implementation’s bytecode, verifies it’s safe, and then calls approveImplementation() on the registry. The moment the new implementation is approved, your contract resumes normal operation. For legitimate upgrades from trusted dependencies, this might happen within hours. For suspicious upgrades, you might choose never to approve and instead migrate to a different dependency.
There are other tradeoffs as well. The gas cost is real but manageable. Each implementation check requires reading the proxy’s implementation address (one external call) and checking the registry’s mapping (another external call that performs an SLOAD). With cold access costing 2,600 gas per account touched and 2,100 gas for the storage read, expect the guard to add a few thousand gas per guarded external call. For high-frequency operations this matters. For most DeFi interactions (swaps, liquidations, oracle queries), it’s noise compared to the gas cost of the actual operation.
One detail worth noting: the guarded client needs a way to read the proxy’s current implementation address. Standard ERC1967 proxies store the implementation at a specific storage slot, and many expose an implementation() getter. If you’re integrating with a proxy that doesn’t expose one, you’ll need to read the storage slot directly or find another way to determine the current implementation. The research repository’s SimpleProxy includes this getter for convenience, but it’s not strictly required by the ERC1967 standard. OpenZeppelin recommends:
To get this value clients can read directly from the storage slot … using the eth_getStorageAt RPC call.
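For ERC1967 proxies that slot is fixed by the standard, so it can be read off-chain with eth_getStorageAt or, for example, in a Foundry test via vm.load (a sketch, assuming forge-std):

```solidity
// The ERC1967 implementation slot, defined by the standard as
// bytes32(uint256(keccak256("eip1967.proxy.implementation")) - 1):
bytes32 constant IMPLEMENTATION_SLOT =
    0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc;

// Read the slot directly in a test – no implementation() getter required.
address impl = address(uint160(uint256(vm.load(address(proxy), IMPLEMENTATION_SLOT))));
```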
Defense Strategy #2: Choose Dependencies with Strong Upgrade Governance
The approved implementation registry gives you control over which versions you trust, but it doesn’t prevent the upstream dependency from being upgraded. It only lets you react after the fact. A better defense layer is choosing dependencies that make malicious upgrades harder to execute in the first place. This isn’t about adding protections to your own contract. It’s about evaluating the governance mechanisms that control the external proxies you depend on and preferring those with stronger safeguards.
Consider the upgrade mechanism as a spectrum of risk. At the highest risk end, you have what PAID Network had: a single externally-owned account with the ability to upgrade instantly. Compromise that one private key and you can deploy any implementation you want with no warning and no delay. The attack surface is minimal (one key), the execution is instant (no time for detection), and the blast radius is complete (full control over contract behavior).
Moving up the spectrum, a multisig requirement improves things substantially. A 3-of-5 multisig means an attacker needs to compromise three separate keys instead of one. This is harder but not impossible. The attack might require social engineering attacks on multiple team members, or exploiting a vulnerability in how the multisig manages keys, or simply waiting for an opportunity where multiple signers are careless. The critical limitation is that there’s still no advance warning. Once the attacker has enough keys, they can execute the upgrade immediately.
A timelock controller changes the game entirely. With a timelock, upgrades must be queued publicly before they can be executed. The typical delay is 48 hours, though some protocols use longer windows. During this delay, anyone can inspect the proposed upgrade. The CallScheduled event fires immediately when the upgrade is queued, making it visible on-chain to anyone monitoring the timelock contract. This gives defenders time to react. If a malicious upgrade is queued, the protocol team can pause the contract, users can withdraw funds, exchanges can delist the token, and the community can coordinate a response. The attack is no longer stealthy.
In the PAID Network scenario, a 48-hour timelock would have fundamentally changed the outcome: The attacker compromises the key and queues the malicious upgrade. The CallScheduled event fires. Within hours, security researchers notice the queued transaction and examine the proposed implementation. They see the custom burn and mint functions hardcoded to the attacker’s address or simply recognize that the upgrade was unexpected and seems suspicious. Alerts go out to the PAID team, exchanges, and the community. The PAID team pauses the contract or cancels the operation before the timelock expires. Users can withdraw their funds or move them to safer contracts if needed. Even if the operation isn’t cancelled, when the timelock finally expires and the attacker tries to execute the upgrade, it succeeds technically but the damage is minimal because everyone had time to react. The difference: $3.16 million stolen versus perhaps zero dollars stolen.
The lowest risk option is no upgradeability at all. Uniswap V2’s core contracts are immutable. Once deployed, the code cannot change. This eliminates upgrade risk entirely. The tradeoff is inflexibility. If a bug is found, it can’t be patched. If a new feature is needed, users must migrate to a new deployment. For some applications this tradeoff makes sense. For others, the ability to upgrade is worth the added risk, as long as that risk is properly managed.
The practical implication for contract developers is that due diligence matters. Before integrating with an external proxy, investigate its governance structure. Is it controlled by a single EOA? That’s a red flag. Is it controlled by a multisig? Better, but still risky. Is it controlled by a timelock with multisig or DAO governance behind it? That’s the configuration you want. Major DeFi protocols like Compound and Uniswap use timelocked governance specifically because they understand the risk profile. If you’re building a contract that will hold significant value, you should prefer dependencies with similar security postures.
The approved implementation registry from Defense Strategy 1 combines well with this approach. A dependency with good governance reduces the likelihood that a malicious upgrade will be attempted. If an upgrade does happen, the timelock gives you advance warning to review it. Your implementation registry gives you the power to decide whether to trust the new version and the ability to prepare your users for potential downtime if you choose not to. The combination of external governance (timelock) plus your own verification layer (registry) plus active monitoring (covered next) creates defense in depth.
Defense Strategy #3: Active Monitoring
The approved implementation registry and governance evaluation are reactive and selective defenses. They help you respond to upgrades and choose safer dependencies. But they don’t help if you don’t know an upgrade happened. Active monitoring closes this gap by detecting changes as they occur so you can respond appropriately.
What you monitor depends on which proxy pattern you’re dealing with. For both Transparent and UUPS proxies, an Upgraded event is emitted whenever an upgrade occurs. For obvious reasons, you should monitor for those events. You should also monitor OwnershipTransferred events, which can signal that a malicious upgrade is on its way, as happened in the PAID Network attack.
If the external dependency uses a timelock, monitoring becomes even more valuable. The CallScheduled event fires when a transaction (like an upgrade) is queued, giving you a period of time to react before it actually executes. This is your early warning system. When you detect a scheduled upgrade, you can extract the new implementation address from the event data and begin your review process immediately. By the time the upgrade actually executes, you’ve already decided whether to approve it in your registry.
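Concretely, these are the event signatures to subscribe to, as declared in OpenZeppelin’s contracts:

```solidity
// Emitted by ERC1967 proxies (both Transparent and UUPS) on every upgrade.
event Upgraded(address indexed implementation);

// Emitted by Ownable contracts when control changes hands.
event OwnershipTransferred(address indexed previousOwner, address indexed newOwner);

// Emitted by TimelockController when an operation (such as an upgrade) is queued.
event CallScheduled(
    bytes32 indexed id,
    uint256 indexed index,
    address target,
    uint256 value,
    bytes data,
    bytes32 predecessor,
    uint256 delay
);
```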
The response workflow has four stages.
- Detection is automated: Your monitoring system sees the event, parses the transaction, extracts relevant addresses, and fires an alert. This happens within seconds of the upgrade transaction being mined.
- Review is manual: Your security team examines the new implementation. For open-source contracts, this means reviewing the verified source code on Etherscan. For unverified contracts, it might mean decompiling the bytecode or using tools like Dedaub to analyze the deployed code – though if you find yourself decompiling bytecode just to identify what a contract does, that’s a strong indication you shouldn’t trust it in the first place. In either case, you’re looking for malicious functions, authorization bypasses, or any changes that could harm your contract’s users.
- Decision is also manual: Based on the review, you decide whether this is a legitimate upgrade you want to trust or a malicious change you need to defend against.
- Action is semi-automated: If you approve the upgrade, your chosen authority calls approveImplementation() on your registry, and your contract resumes normal operation. If you reject it, you might pause your contract, migrate to a different dependency, or simply leave the new implementation unapproved and let your contract remain non-functional for that particular external call.
This workflow is why monitoring is critical. Without it, the first sign of an upgrade is when your contract mysteriously stops working. Users get cryptic “Implementation not approved” errors. You don’t know which external dependency changed, when it changed, or what the new implementation does. You’re debugging reactively instead of responding proactively. With monitoring, you know about the upgrade before or immediately after it happens. You can review it on your schedule. You can communicate with users about planned downtime. You can make informed decisions about whether to trust the new code.
The implementation details of monitoring will vary based on your infrastructure. If you’re running your own nodes, you can subscribe to event logs directly and process them in real time. If you’re using third-party services, options include Alchemy’s webhooks, The Graph’s subgraphs, or Tenderly’s alerting system. The key is reliability. A missed alert is a missed opportunity to protect your users. Whatever system you build, it should be redundant, tested regularly, and have clear escalation procedures for when alerts fire.
Practical Recommendations for Contract Developers
Due Diligence Before Integration
The defensive strategies outlined above work best when applied systematically. Before integrating any external upgradeable proxy into your contract, conduct thorough due diligence on its upgrade mechanism. Start by identifying which proxy pattern it uses. Transparent Proxies have a separate ProxyAdmin contract that controls upgrades, while UUPS proxies have the upgrade logic embedded in the implementation itself. You can usually identify the pattern by looking at the proxy’s verified source code on Etherscan or by checking which OpenZeppelin base contracts it inherits from. Once you know the pattern, find out who controls the upgrades. For Transparent Proxies, this means identifying the owner of the ProxyAdmin contract or a designated list of admin addresses. For UUPS proxies, look at who can pass the _authorizeUpgrade() check, which is typically controlled by an Ownable or AccessControl implementation.
Next, evaluate the security of that control mechanism:
- Check if there’s a timelock in place and what the delay duration is (48 hours is reasonable for most applications)
- Look for multisig requirements and understand how many signatures are needed
- Review the proxy’s upgrade history on the block explorer (frequency, communication, transparency)
- Assess the team’s reputation and security practices (audits, bug bounties, track record)
Implementation Best Practices
Once you’ve decided to integrate the external proxy, implement the defensive patterns discussed earlier:
- Deploy a separate implementation registry contract as part of your standard infrastructure
- Use the onlyApprovedImplementation modifier pattern on every function that calls the external proxy
- Set up automated monitoring for the relevant events (Transparent Proxy, UUPS, or timelock-specific)
- Document your emergency response procedures so on-call engineers know exactly what to do
- Test your emergency pause functionality periodically to ensure it works when needed
- Be transparent with your users about which external dependencies you rely on
Documentation and Communication
Documentation is often overlooked but becomes critical during an incident. Your documentation should clearly disclose which external proxies your contract calls and at what addresses. Explain their governance model so users can assess the risk themselves. A dependency controlled by a well-known DAO with a 72-hour timelock is very different from one controlled by an anonymous team with a 2-of-3 multisig. Document your own mitigation strategies so users understand what protections you’ve implemented and what residual risks remain. When an external dependency upgrades, communicate this to your users promptly, explain what changed, and confirm that you’ve reviewed and approved the new implementation.
Ongoing Maintenance
Ongoing maintenance is not optional. Monitoring is not something you set up once and forget about:
- Continuously watch for upgrade events on all external dependencies
- Review upgrades when they occur and verify the new implementation is safe
- Update your approved implementations after completing security reviews, including revoking implementations that are no longer approved
- Test emergency procedures periodically as your contract evolves
- Stay informed about new attack vectors and defensive techniques
The security landscape changes constantly, and your defensive posture needs to evolve with it.
Conclusion
Two Problems, Two Solutions
The PAID Network attack exposed two distinct security problems with upgradeable proxies, and conflating them leads to incomplete defenses. Problem 1 is about protecting your own proxies from malicious upgrades through better access controls like timelocks, multisigs, and DAO governance. Problem 2 is about protecting your contracts when external dependencies you don’t control are compromised. The solutions are different because the threat models are different. You control the upgrade mechanism for problem 1 but not for problem 2.
Defense in Depth
For problem 2, which this post has focused on, defense in depth is essential. No single mitigation strategy is perfect:
- An approved implementation registry gives you control over which versions you trust, but trades availability for security
- Choosing dependencies with strong governance reduces the likelihood of malicious upgrades but doesn’t eliminate it
- Active monitoring detects changes quickly but requires operational overhead
The best approach combines multiple layers. Use a registry to enforce your trust decisions, prefer dependencies with timelocked governance to get advance warning of upgrades, and monitor continuously so you can respond quickly when changes occur.
Lessons from PAID Network
The PAID Network case study illustrates why this matters. PAID’s failure was a problem 1 failure: they didn’t protect their own upgrade mechanism adequately. A single EOA with no timelock and no multisig meant one compromised key gave the attacker complete control. If other contracts had integrated with PAID (and many projects do integrate with external tokens), those contracts would have faced problem 2. They would have been calling a proxy that suddenly had malicious burn and mint functions. An approved implementation registry would have protected them. The moment PAID upgraded to the malicious implementation, calls to the integrated contract using the PAID proxy would have reverted because the new implementation wasn’t on the approved list. The integrated contract would have stopped working, but its users would have been protected.
The Trust Problem
The deeper lesson is about trust assumptions. Upgradeable external dependencies create ongoing trust relationships. When you call an upgradeable proxy, you’re not just trusting the code as it stands right now. You’re trusting the upgrade mechanism, the people who control it, the security of their key management, and every future version they might deploy. This is directly antithetical to the immutability that makes blockchain applications trustworthy in the first place. You can’t eliminate this tension entirely. Upgradeable proxies exist because complete immutability is impractical for many applications. But you can and should implement appropriate defenses for your threat model. Your reputation is on the line not just for the code you write, but for the entire dependency stack you build on top of. Understanding the risks and architecting your contracts to fail safely when external dependencies change unexpectedly is not optional. It’s fundamental to building secure decentralized applications.
What are your thoughts on securely using proxy contracts? What do you think of these protection mechanisms? Could they be improved? Comment below!