HashPack INO Overview and Postmortem

March 18, 2024

Yesterday, we launched the HashPack Concierge Collectibles, an event that had been highly anticipated by our community for the better part of the past month.

On the morning of the launch, the event patch had been successfully pushed to all platforms, the store was primed, and the NFTs were ready in our treasury, waiting to be purchased.

At 12 p.m. EST, the collection went on sale, and by 12:22 we were sold out, a testament to just how much excitement our community brought to the event.

At-a-glance Stats

Let’s start with some stats around the mint:

  • Over 7,000 users participated in the mint
  • Over 30,000 unique purchase attempts were made
  • The collection ended the day with 1,750 unique holders
  • Roughly 3 million mirror node calls were made during the mint
  • We officially sold out after 22 minutes

In the hours after the launch, the Concierge collection broke numerous annual records on the secondary NFT marketplace, and our whole team stayed active, answering questions and communicating with the community late into the night.

Overall, the data says the launch was successful. However, we are painfully aware that for those who participated, the launch was anything but enjoyable.

In our efforts to keep users informed about the issues occurring during those hectic 22 minutes, we made some public statements that ultimately caused confusion in the community. We will definitely take some lessons from that, but we also know that our community is looking for more information on what went wrong on the technical side.

In this article, we’d like to provide a transparent postmortem of the mint, going over the issues we faced, the errors our team made, and how we can improve in the future.

INO Postmortem

HashPack uses two mirror node providers: a primary and a secondary. In the days leading up to the INO, our primary provider started experiencing intermittent issues that caused data queries to fail. The primary is what we tested the INO system with; notably, it does not enforce rate limits (a rate limit is the maximum number of calls you can make to a service in a given window of time). It should be noted that these issues are very unusual for our primary provider, and we typically receive excellent performance from them.
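
To illustrate the setup, here is a minimal sketch of primary/secondary failover logic, assuming a generic fetch-based client; the endpoints and error handling are hypothetical placeholders, not our actual implementation.

```typescript
// Minimal sketch of primary/secondary mirror node failover.
// The endpoints are illustrative placeholders, not real provider URLs.
const PRIMARY = "https://primary-mirror.example.com/api/v1";
const SECONDARY = "https://secondary-mirror.example.com/api/v1";

async function queryMirror(path: string): Promise<unknown> {
  try {
    const res = await fetch(`${PRIMARY}${path}`);
    if (!res.ok) throw new Error(`primary returned ${res.status}`);
    return await res.json();
  } catch {
    // Fall back to the secondary. Note: the secondary may enforce
    // rate limits that the primary does not.
    const res = await fetch(`${SECONDARY}${path}`);
    if (!res.ok) throw new Error(`secondary returned ${res.status}`);
    return await res.json();
  }
}
```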

As the time of the INO approached, the number of requests increased, and so did the number of failures. Since the backup provider was not experiencing these issues, we switched over to it a few minutes before the INO went live.

The factor we overlooked when switching to the backup mirror node was that it enforces a rate limit, something we hadn’t really encountered before. Due to the high turnout and the large number of calls the INO required, that rate limit started to affect us. We have handled large mints by other platforms before, but in those cases a large share of the calls are made outside of HashPack and do not count against our rate limits, something we missed in our testing.
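
For context, a rate limit typically surfaces client-side as HTTP 429 responses, and a common mitigation is to retry with exponential backoff. The sketch below is a hypothetical illustration of that pattern, not our production code.

```typescript
// Hypothetical retry-with-exponential-backoff for HTTP 429 (rate limited).
async function fetchWithBackoff(url: string, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url);
    if (res.status !== 429) return res; // not rate limited; return as-is
    // Wait 2^attempt * 250 ms plus a little jitter before retrying.
    const delayMs = 2 ** attempt * 250 + Math.random() * 100;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Still rate limited after ${maxRetries} attempts: ${url}`);
}
```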

At first, we didn’t realise this and simply saw a stream of errors being returned, but as the mint went on and we fleshed out our logging, it became clear what was happening. A large number of purchases were going through, but whether a specific transaction succeeded or failed was very hit-or-miss.
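
As a hypothetical example of the kind of logging that surfaces this pattern: counting responses per status code makes intermittent rate limiting (429s mixed in with 200s) stand out immediately, as opposed to a hard outage where everything fails.

```typescript
// Hypothetical status-code histogram; a mix of 200s and 429s points to
// intermittent rate limiting rather than a full outage.
const statusCounts = new Map<number, number>();

async function loggedFetch(url: string): Promise<Response> {
  const res = await fetch(url);
  statusCounts.set(res.status, (statusCounts.get(res.status) ?? 0) + 1);
  return res;
}

// Dump the histogram every 10 seconds, e.g. Map { 200 => 1400, 429 => 600 }.
setInterval(() => console.log("mirror responses:", statusCounts), 10_000);
```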

To be clear, what happened was not an issue with Hedera or the performance of mirror node infrastructure, though it did seem like it at the time. In an effort to keep users informed, we made some statements that, looking back, were not accurate, but they were based on our understanding of the situation at the time.

The rate limits are in place to prevent degradation of service. The problem could have been avoided by closer cooperation between the parties, but because these events unfolded quickly, and on a Sunday, there was very little time for collaboration. Generally, both mirror node services have been very solid, but every system has a bit of downtime, which is why we run a backup to smooth things out.

It’s unfortunate that issues with our primary provider coincided with the INO launch in this way. We worked incredibly hard on the INO system, and seeing it derailed like this was hard for everyone involved.

Below is a chart of the traffic flowing through our backend, which usually hovers around 150–200 req/s at normal load. As you can see, at the peak of the launch we were handling over 2,000 req/s, more than ten times our usual load and well beyond the mirror node rate limit of 1,000 req/s.

HashPack Traffic
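
One way to stay under a hard provider-side ceiling like this is to throttle outbound calls on the client side, for example with a token bucket capped just below the limit. Here is a minimal sketch, assuming a 1,000 req/s ceiling and a 900 req/s cap for headroom; the numbers and structure are illustrative.

```typescript
// Minimal token-bucket throttle, capped below an assumed 1,000 req/s limit.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private ratePerSec: number, private capacity: number) {
    this.tokens = capacity;
  }

  tryTake(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, up to capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.lastRefill) / 1000) * this.ratePerSec,
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Leave headroom: cap at 900 req/s against a 1,000 req/s provider limit.
const bucket = new TokenBucket(900, 900);

async function throttledFetch(url: string): Promise<Response> {
  while (!bucket.tryTake()) {
    await new Promise((resolve) => setTimeout(resolve, 5)); // wait for a token
  }
  return fetch(url);
}
```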

Distribution

We are happy with the distribution of the INO. The issues we ran into inadvertently turned the mint into a sort of lottery, in the sense that whether a given purchase succeeded was effectively random.

Data provided by TierBot.

For the HashPack Concierge Collection, the average account minted 4.5 NFTs. No account got more than 2% of the supply; the majority (51%) hold less than 0.15%, and 80% hold less than 0.75%.

HashPack Concierge Collectibles

For comparison, here are some other INO distribution charts:

Other comparable INO launches

Lessons Learned

Over our two years in the Hedera ecosystem, we have facilitated many NFT mints and assisted with the launch of many projects. We’ve had our fair share of successes, but it’s the failures we learn the most from. Here are our team’s takeaways from this event.

Looking back at the hours and minutes leading up to launch, when we saw signs that the store and transactions were unstable, we should have paused the INO to ensure a smooth mint. However, we underestimated the extent of the disruption. Once the mint kicked off, we were in full triage mode, switching from our primary to our backup mirror node and attempting to fix the issue live. We received a lot of error reports from the community, but community members were also reporting successes, and nothing seemed explicitly wrong with the transactions that did make it through.

Before we knew it, the launch had sold out in 22 minutes, and wallet performance was steadily returning to normal.

Another step we missed was contacting our mirror node providers to coordinate the mint. We knew this was going to be a big event, but we underestimated the traffic it would generate, which peaked at ten times our typical highs. These mirror node providers have handled mint events effortlessly for the past year, so not involving them directly was our oversight.

Had we brought ValidationCloud and Arkhia in early, their support staff would have been on hand to address issues as they came up, raise account limits, and take other actions that might have led to a smoother mint.

Closing thoughts

At the end of the day, despite the complications, the HashPack Concierge Collection sale was still a major success and broke many ecosystem records. We are incredibly thankful for the overwhelming support and enthusiasm of our community, and we look forward to continuing to build and lead innovation in the Hedera ecosystem.

We hope that the lessons learned from this event will inform and help other projects in the future, and we certainly will take notes and learn from our mistakes.

It’s an exciting new chapter for HashPack, and we can’t wait to launch the $PACK token in April.
