Let’s start by setting up the basic architecture: I want to participate in a Solana hackathon, which requires building a project on Solana. I’m planning to make a roguelike deck-building game similar to Backpack Hero.
Let me turn this into a practical starting plan: first, confirm the current Solana hackathon and development entry points, then give you an on-chain/off-chain architecture, repository structure, and first-phase milestones suited to a Backpack Hero–style roguelike deckbuilder.
Yes—do not rush to put too much logic on-chain. This direction is a very good fit for a Solana hackathon project built around “running core combat off-chain while using the chain for assets, seasonal results, and verifiable achievements.” That makes it much easier to deliver something playable and demoable within the available time, while still clearly explaining why it needs Solana. Solana still treats hackathons as an online launchpad for projects, and entries are expected to be clearly “built on Solana”; the official development docs also continue to recommend starting from existing templates, Anchor, and game examples.
You can define the project in one sentence like this:
Bag of Fate / Packrun
A game that combines inventory management + roguelike card combat: during a run, players collect cards, relics, and backpack items, and the way they arrange their backpack affects combat deck composition, cost, triggers, and progression. Outside the run, key achievements are written to Solana, forming assets, seasonal rankings, and a tradable/composable game identity.
The benefits of this positioning are:
The safest architecture for version one is:
The reason is practical: Solana programs are well suited for state confirmation, asset ownership, and composability, but not for writing every step of a roguelike’s in-run state to the chain. Solana’s official development materials also emphasize watching compute usage and provide game-oriented templates and SDKs.
Compress the design into these modules first:
That is already enough for a solid demo.
You need to make the “why on-chain” argument very concrete.
After each successful clear, generate a verifiable run record:
Writing this information to Solana gives you:
Turn relics / titles / skins into assets
That gives you a clean answer to the question, “Why isn’t this just a normal web game?”
Composable identity
The player’s wallet address is their identity, which can later expand into:
Solana’s game ecosystem pages also emphasize this kind of game-oriented development capability and ecosystem tooling.
For a hackathon, the best choice is one of these two:
Best if you want to quickly build a playable browser demo.
Next.js/React + @solana/web3.js + Anchor. Solana’s official docs clearly lay out this path for games and also mention Phaser templates as a good starting point for browser-based games.
Best if you care more about presentation quality, though development may be slower during a hackathon. Solana’s docs list a Unity SDK as one of the community game SDKs.
For your game, I recommend Option A.
Reason: card gameplay + backpack grids + browser demo = faster development.
Anchor is still one of the most common and fastest ways to get into production development on Solana, and both Solana’s official docs and Anchor’s docs place it prominently in their Quickstart / Basics paths.
packrun/
apps/
web/ # Next.js frontend + wallet integration + game UI
game/ # Phaser game core (can also be merged into web)
programs/
packrun_program/ # Anchor program
packages/
sdk/ # TypeScript SDK for calling the program from the frontend
game-core/ # Pure TypeScript combat rules, backpack calculations, card resolution
shared/ # Types, constants, schemas
backend/
api/ # Leaderboard / replay / anti-cheat helpers
assets/
cards/
relics/
enemies/
docs/
pitch.md
economy.md
architecture.md
Start with at least these 4 PDAs:
seeds = ["player", player_pubkey]
fields:
- authority
- username / nickname
- total_runs
- best_score
- highest_floor
- unlocked_flags
seeds = ["run", player_pubkey, run_id]
fields:
- player
- run_id
- seed
- score
- floor_reached
- result
- deck_hash
- relic_hash
- created_at
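A minimal sketch of how fields like deck_hash / relic_hash could be produced client-side before submission, assuming card and relic IDs are plain strings (the helper name contentHash is illustrative, not from any SDK):

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: hash a collection of card/relic IDs into a fixed-size
// commitment. Sorting first makes the hash order-independent, so two decks
// containing the same cards always produce the same deck_hash.
export function contentHash(ids: string[]): string {
  const canonical = [...ids].sort().join("|");
  return createHash("sha256").update(canonical).digest("hex");
}

// Same cards in a different order yield the same commitment:
const a = contentHash(["strike", "block", "whirlwind"]);
const b = contentHash(["whirlwind", "strike", "block"]);
```

Storing only the hash keeps the RunRecord account small while still letting anyone with the full deck list check it against the on-chain value.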
seeds = ["season", season_id]
fields:
- season_id
- start_ts
- end_ts
- ruleset_hash
seeds = ["achievement", player_pubkey, achievement_id]
fields:
- player
- achievement_id
- unlocked_at
That is already enough for a demo. Do not start by building a complicated economy system.
If you are making it into “Backpack Hero + card roguelike”, it is best to highlight one genuinely fresh core mechanic:
Backpack layout directly rewrites deck behavior
For example:
That gives you three layers of strategy:
That will impress judges much more easily than “ordinary card game + ordinary NFT.”
Recommended demo order:
That is enough to explain within 2–3 minutes that the game is fun + Solana is necessary.
Limit version one to:
Do not touch Anchor yet. First build:
Only do:
- init_player
- submit_run
- unlock_achievement

That is enough.
You can start with this version:
We are building a roguelike deck-building game in the Solana ecosystem. During each run, players manage backpack layout, collect items, and build decks to complete battles. Unlike traditional card games, the positional relationship between items in the backpack directly affects the cost, effects, and chain triggers of combat cards. Solana in this project is used for player identity, run records, seasonal leaderboards, and rare achievement assets, making game progress and highlight moments verifiable, ownable, and composable.
This is a very good hackathon theme.
Because it gives you all of the following at once:
The most suitable next step is for me to break this down into a technical architecture + module checklist + 2-week development schedule.
As for on-chain interaction, should it be designed so that every small battle interacts with the chain, or the whole game interacts with the chain once after completion, or should every action / every card draw interact with the chain?
Let me turn this into a practical decision: compare three levels of on-chain interaction granularity, then give you the approach that best fits a hackathon and can actually run in practice.
Here is the conclusion first:
For the hackathon version, the best solution is to interact with the chain when the entire run ends, or at a few “key checkpoints.”
Do not make every action / every card draw go on-chain, and in most cases do not make every small battle go on-chain either.
The reason is not that it is “impossible,” but that product experience, cost, failure rate, and development complexity all get noticeably worse. Solana fees are low, but every transaction still has a base fee; and even when a transaction fails, the fee is usually still charged. The official docs also explicitly emphasize minimizing compute usage and paying attention to compute budget and priority fees.
For example:
I do not recommend this.
Why:
There is only one case where this makes sense:
When the point of the game itself is a fully on-chain strategic contest, and the on-chain verifiable process is the project’s main selling point.
But that is clearly not the selling point of your theme.
For example:
This is possible, but not the best default choice.
Advantages:
Disadvantages:
When is it suitable?
When “each battle” itself is the unit of competition, for example:
For your current first version of a Backpack Hero–style roguelike card game, I do not think this is the best starting point.
For example:
This is the default solution I recommend most.
Advantages:
- init_player
- start_run (optional)
- submit_run
- unlock_achievement

Not purely a single interaction at the very end, but:
Optionally make a lightweight start_run
Record:
Purpose:
Implement submit_run
Record:
Only put very rare, high-value events on-chain, such as:
This structure is more stable than “submit only at the very end,” while still not damaging the gameplay experience.
Because it sits in the most awkward middle ground:
So unless your natural competitive unit is literally “one battle,” it is not worth it.
For version one, I recommend keeping only these on-chain data structures:
That is already enough for a hackathon demo.
Recommended architecture:
Complete the entire run’s combat and numerical resolution off-chain, and use the chain only for run-start commitment + run-end result submission + achievements / leaderboard.
This is the most balanced choice right now:
My next step could be to draw out the sequence diagram and account structure for this “game frontend + Anchor program + submission flow.”
If we use an all-off-chain run model with only the final result submitted on-chain, then from a security perspective, if someone modifies the drop rate of rare cards in combat rewards, would that be hard to detect?
Let me define the risk boundary clearly first: if the whole game runs off-chain and only the final result goes on-chain, then once a core judgment like drop rate is handled entirely on the client, it is indeed very hard for the chain itself to detect manipulation. Next I’ll break down where the difficulty lies, what kinds of cheating are most realistic, and how to patch it enough for a hackathon.
Yes—and not just “a little difficult,” but:
If the drop rate of rare cards is calculated locally on the client, and you only submit the result on-chain after the run ends, then cheating by tampering with that drop rate is usually invisible to the chain itself.
That is because a Solana program only sees the final submitted result; the program itself is stateless code, and only the state you explicitly pass in and store participates in validation. In other words, the chain does not automatically know whether this rare card really dropped at a 1% rate, or whether you changed your local code so it dropped at 100%.
What you are discussing is a classic issue:
“The game process is off-chain, but the result is on-chain.”
The biggest security weakness in this architecture is not forged transactions, but the invisibility of the process that produced the result.
If all of the following happen on the client:
Then an attacker only needs to modify the frontend JS, local memory, a proxy, or API responses, and then submit a result that looks legal.
On-chain, you can only validate:
But it is very hard to verify whether the drop was really calculated under the correct probabilities. That is why verifiable randomness is treated as its own class of infrastructure in on-chain gaming. Switchboard’s docs state it directly: on-chain randomness is hard, and randomness usually needs to be provided by an external mechanism and then submitted as a single result to the chain; otherwise different nodes generating randomness independently would cause inconsistent state.
If someone changes the drop rate of a rare card from 1% to 5%, they do not need to get god-tier cards every run.
They only need to increase it slightly, and over time the gains are huge—even though any single run does not look outrageous.
That is much harder to catch than something like “submit a score of 99999,” because the latter looks like an obvious outlier while the former looks like lucky variance.
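To make “lucky variance vs. inflated rate” concrete, here is a rough statistical sketch (the run and roll counts are made-up assumptions): with enough aggregate data, a normal-approximation z-score cleanly separates an honest 1% player from a 5% cheater, even though no single run looks suspicious.

```typescript
// Under a true drop rate p over n reward rolls, the observed rare count k has
// mean n*p and stddev sqrt(n*p*(1-p)); the z-score measures how many standard
// deviations above expectation a player's total sits.
export function dropZScore(k: number, n: number, p: number): number {
  const mean = n * p;
  const sd = Math.sqrt(n * p * (1 - p));
  return (k - mean) / sd;
}

// An honest player over ~500 runs (~5000 rolls at 1%) expects ~50 rares:
const honestZ = dropZScore(50, 5000, 0.01); // close to 0, unremarkable
// A cheater who bumped the rate to 5% averages ~250 rares over the same
// rolls: a massive aggregate outlier, yet invisible in any single run.
const cheaterZ = dropZScore(250, 5000, 0.01);
```

This is exactly why purely statistical detection works only after the fact and only at scale, which is why the later sections move the decision off the client entirely.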
If the chain stores only:
And you do not have:
Then it becomes almost impossible to replay why that rare card dropped in the first place.
If the replay log is also generated by the client, then an attacker can forge the log as well.
So “I have a replay file” does not automatically mean “I can prove the result is legitimate.”
The correct way to understand it is:
The problem is not that the whole run is off-chain; the problem is that you must take the power to generate high-value outcomes away from the client.
For your project, the key issue is not whether every draw goes on-chain, but this:
At minimum, those need to be handled in one of the following three ways.
This is the most practical hackathon solution.
Flow:
Very suitable.
This is the more orthodox on-chain fairness solution.
Switchboard’s Solana randomness docs and tutorials are explicitly built around verifiable randomness and commit-reveal patterns. The core goal is to make sure results cannot be changed by the client at will.
A possible flow:
- seed_commit on-chain

Use it only for high-value drops, not for everything.
For example:
Do not use it for normal white/blue cards.
This is a compromise:
Suitable for phase two, but not as the first security foundation.
Then the answer is:
Yes, it will be quite hard to detect if someone tampers with the rare drop rate.
It is not completely impossible to detect, but in practice you would usually only catch it through:
That is not stable enough.
I recommend tiering rewards:
For example:
These can be generated off-chain.
Even if someone farms a little, it will not destroy the demo.
For example:
Do not let the client decide them directly.
At least use one of these:
For example:
Their issuance conditions must be controlled either by the on-chain program or by a trusted settlement service.
Fully off-chain
Calculated off-chain
Settled and signed by the backend
Include the following when submitting:
That way, even if someone changes the local drop rate, they still cannot obtain a valid signature, so they cannot turn a forged rare reward into an official on-chain result.
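One way to sketch the backend-signature idea with Node’s built-in ed25519 support (in production the Solana program would check the same signature on-chain, for example via the ed25519 precompile; all names and payload fields here are illustrative):

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// The settlement backend holds the private key; anyone with the public key
// (including an on-chain program) can verify that a reward was issued by it.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

export function signSettlement(payload: string): Buffer {
  return sign(null, Buffer.from(payload), privateKey); // null digest = raw ed25519
}

export function verifySettlement(payload: string, sig: Buffer): boolean {
  return verify(null, Buffer.from(payload), publicKey, sig);
}

// A client that inflates its reward cannot produce a valid signature:
const honest = JSON.stringify({ run_id: 42, reward: "rare_card_7" });
const forged = JSON.stringify({ run_id: 42, reward: "legendary_card_1" });
const sig = signSettlement(honest);
```

The design choice here is that the chain never needs to know the drop tables at all; it only needs to know one public key.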
You could say this in a demo pitch:
Normal combat and deck-building happen off-chain to preserve the fluid gameplay experience of a roguelike; however, all high-value drops, rare rewards, and season-related outcomes are generated through a controlled settlement flow and validated by signature checks or verifiable randomness before being submitted to Solana, preventing forged assets caused by client-side drop-rate tampering.
That sounds like something a mature project would say.
“Only putting the run on-chain after it ends” is not the real problem; the real problem is “who decides high-value random outcomes.”
If the answer is “the client decides,” then yes, cheating is genuinely hard to detect.
If the answer is “a server signs it” or “verifiable randomness decides it,” then you can reduce the risk to an acceptable hackathon level.
My next message could directly give you a rare-drop security design diagram, including:
Your direction is correct, and it is currently one of the most elegant decentralized solutions available:
Use a seed to determine all randomness in the run → make the game fully deterministic → let the chain only verify the result
At its core, this is:
Deterministic Roguelike + Verifiable Replay / Proof
A lot of systems inspired by Slay the Spire + fair competition go in exactly this direction.
✅ Yes, it can be done, and it fits your game type very well
❗ But it is not “without flaws”—it has several very important traps:
The biggest issue is not “can it be verified,” but:
👉 how to stop players from cherry-picking seeds / brute-forcing starts / seeing the future in advance
Let me unpack it fully below.
What you want is:
all randomness in the run driven by a single seed: rng = PRNG(seed)
Submit:
Verify by:
👉 If it matches → legal
👉 If it does not → cheating
Because:
You can even say:
“All runs are replayable and verifiable.”
That is a major plus in a hackathon setting.
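A toy sketch of the replay idea, assuming a deterministic PRNG and a single placeholder action type. The combat rules are invented for illustration; the shape is the point: replay(seed, actions) is a pure function whose output hash any verifier can recompute and compare against the claim.

```typescript
import { createHash } from "node:crypto";

// Mulberry32-style deterministic generator: same seed, same sequence, always.
function prng(seed: number): () => number {
  let s = seed >>> 0;
  return () => {
    s = (s + 0x6d2b79f5) >>> 0;
    let t = Math.imul(s ^ (s >>> 15), 1 | s);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

export function replay(seed: number, actions: string[]): string {
  const rng = prng(seed);
  let enemyHp = 20 + Math.floor(rng() * 10); // seeded initial state
  for (const action of actions) {
    if (action === "attack") enemyHp -= 3 + Math.floor(rng() * 3); // seeded damage roll
  }
  return createHash("sha256").update(String(enemyHp)).digest("hex"); // final_state_hash
}

// Verifier: recompute the final-state hash and compare with the claimed one.
export function verifyRun(seed: number, actions: string[], claimed: string): boolean {
  return replay(seed, actions) === claimed;
}
```

In a real game the state would be a full struct and step() a per-action reducer, but the verification contract is identical.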
If the seed is generated by the player or is predictable:
And then only submit the “good” run
👉 This is called:
seed grinding / cherry-picking
If the seed is:
seed = current_timestamp
Then the player can:
👉 That destroys the leaderboard.
Because:
seed → all randomness is predetermined
The player can:
👉 This is called:
perfect foresight
And it will:
You said:
“As long as the chain recomputes it, it can determine whether cheating happened.”
That is theoretically true, but in practice:
Because of:
👉 So in practice:
the chain usually does not replay the whole run
Instead, it only verifies:
👉 summaries (hashes / signatures / commitments)
You must guarantee that all of these are consistent:
Otherwise:
👉 The replay result will diverge → false cheating flags
Flow:
commit = hash(secret_seed)
Using:
seed = hash(secret_seed + run_id)
Submit:
secret_seed
The chain verifies:
hash(secret_seed) == commit
👉 The player cannot change the seed midway through the run.
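The commit-reveal flow above can be sketched in a few lines; helper names are illustrative and hashing is plain sha256 for brevity:

```typescript
import { createHash, randomBytes } from "node:crypto";

const sha256 = (data: Buffer) => createHash("sha256").update(data).digest("hex");

// Before the run: the player publishes commit = hash(secret_seed) on-chain.
export const makeCommit = (secretSeed: Buffer): string => sha256(secretSeed);

// During the run: the actual seed is bound to the committed secret + run id,
// so one commitment cannot be reused to grind many runs.
export const deriveRunSeed = (secretSeed: Buffer, runId: number): string =>
  sha256(Buffer.concat([secretSeed, Buffer.from(String(runId))]));

// After the run: the revealed secret must match the earlier commitment.
export const checkReveal = (revealed: Buffer, commit: string): boolean =>
  sha256(revealed) === commit;

const secret = randomBytes(32);
const commit = makeCommit(secret);
const runSeed = deriveRunSeed(secret, 1);
```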
For example:
seed = hash(
player_pubkey +
slot +
recent_blockhash
)
👉 The player cannot fully control the seed
But:
Like Slay the Spire:
👉 Advantages:
👉 I strongly recommend adding this mode.
You have a few options:
Slay the Spire itself is technically predictable too.
👉 Most players will not actually compute the whole future.
For example:
👉 More complex; not recommended for a hackathon
Do not ask the chain to replay the whole run.
Instead, submit:
And let the chain verify only:
Do it off-chain (client / community / verifier)
👉 This is the realistic path.
You must enforce the following:
- no Math.random()

👉 Used for gameplay experience
👉 This is the mode you should prioritize most.
👉 No backend required
Your core idea is:
Turn “randomness” into “determinism,” then turn the “verification problem” into a “replay problem.”
That is a very correct direction.
But you must solve three things:
👉 For the hackathon version:
👉 What you get:
If you want, my next step could be to help you design:
👉 A complete deterministic RNG + drop table + seed derivation structure (with code-level design)
That will directly define the “soul” of your whole game system.
Regarding the on-chain verification part, how would you actually use a seed to verify an entire run? A single run may contain several battles, multiple choices, and even different card choices in every battle. Is full verification really possible?
Let me split the question of “whether it can be verified” into two parts first: in theory, yes, but you should not fully replay an entire run on Solana itself. I’ll first confirm the latest official wording around compute budget and transaction limits, then give you a verification architecture that can actually be implemented.
Yes, it can be verified—but you need to distinguish between two meanings of “verification”:
The core reason is that a single Solana transaction has a limited compute budget. The official docs currently state a default of 200,000 CU per non-built-in instruction and a maximum of 1,400,000 CU per transaction; priority fees are also charged based on the CU limit you request, not the amount you actually use. For a roguelike with multiple battles, many reward resolutions, route choices, and card choices, stuffing a complete replay of the whole run into a single on-chain verification is usually very tight—and often unrealistic.
The practical solution is not “replay the entire run on-chain,” but rather “make the run replayable off-chain and let the chain only verify light commitments.”
You can design the system like this:
That is something you can actually build.
As long as you write the whole game as a deterministic state machine, it can be verified.
Formally, it looks like this:
state_0 = init(seed, ruleset, player_loadout)
state_1 = step(state_0, action_1)
state_2 = step(state_1, action_2)
...
state_n = step(state_{n-1}, action_n)
Here:
- seed determines all randomness
- action_i is the player’s i-th action
- step() is a pure function
- the same seed + the same action sequence must always produce the same final result

So even if a single run contains:
As long as all of them come from the same deterministic rule system, they can all be replayed and verified.
You cannot mix in any local randomness, such as:
- Math.random()

You should structure it more like this:
rng_master = PRNG(seed)
battle_seed_1 = hash(seed, "battle", 1)
reward_seed_1 = hash(seed, "reward", 1)
shop_seed_1 = hash(seed, "shop", 1)
In other words, do not just call next() on one global PRNG forever; instead, derive seed branches by domain/event type.
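The derivation scheme above can be sketched like this; the label strings and hash construction are illustrative choices, not a fixed spec:

```typescript
import { createHash } from "node:crypto";

// Domain-separated seed derivation: each subsystem and event index gets its
// own branch seed, so consuming extra randomness in one place (an extra shop
// roll, say) cannot shift the randomness of a later battle.
export function deriveSeed(masterSeed: string, domain: string, index: number): string {
  return createHash("sha256")
    .update(`${masterSeed}:${domain}:${index}`)
    .digest("hex");
}

// battle 3's seed is untouched by how many shop rolls happened before it:
const battle3 = deriveSeed("master", "battle", 3);
const shop1 = deriveSeed("master", "shop", 1);
```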
That has major advantages:
This is the most important point.
You must avoid:
It is best to standardize on:
Otherwise, you will hit a very annoying problem:
Not that the player cheated—but that your own replay disagrees with itself.
Submitting only:
is not enough.
Because under the same seed, different mid-run choices can lead to completely different endings.
So at minimum, you need:
The simplest approach is to encode a run as a sequence of actions:
[
choose_path(2),
reward_pick(1),
play_card(card_7, target_0),
end_turn,
play_card(card_3, target_1),
...
]
With seed + action log, you can fully replay the run off-chain.
Otherwise you hit another problem:
Then the replay may produce a different result immediately.
So every run should bind to:
- ruleset_version
- content_hash
- drop_table_hash

At minimum, the chain should store one of these summary values.
Because your run may contain hundreds of state transitions.
And on Solana, both transaction fees and priority fees depend on the compute budget you request; the official docs explicitly say that priority fees depend on the requested CU limit, not the actual usage. So even if you somehow manage to force a replay onto the chain, the cost, failure rate, and development complexity will all look bad.
The more practical issues are:
So the architecture should avoid “replay the full run on-chain.”
This is my top recommendation.
Submit a RunCommit:
Save locally:
Submit:
That is done by off-chain verifiers:
This is “publicly verifiable,” but not “executed step-by-step on-chain.”
If you really want the chain to participate more, you can split a run into segments:
At the end of each segment, generate a segment_hash:
segment_1 = hash(seed_1, actions_1, state_end_1)
segment_2 = hash(seed_2, actions_2, state_end_2, segment_1)
...
Then the chain only receives:
That is more stable than “verify the whole run once” and easier to audit selectively.
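A sketch of that hash chain, assuming per-segment seeds and pre-hashed action logs (field names are illustrative): each segment commits to the previous segment’s hash, so no middle segment can be swapped out without changing every later hash.

```typescript
import { createHash } from "node:crypto";

interface Segment {
  seed: string;
  actionsHash: string;
  endStateHash: string;
}

// Returns the chained hash after each segment; the last entry is what the
// chain would store (or verify) as the run's overall commitment.
export function chainSegments(segments: Segment[]): string[] {
  const hashes: string[] = [];
  let prev = "genesis"; // placeholder anchor for the first segment
  for (const s of segments) {
    prev = createHash("sha256")
      .update(`${s.seed}|${s.actionsHash}|${s.endStateHash}|${prev}`)
      .digest("hex");
    hashes.push(prev);
  }
  return hashes;
}
```

An auditor can then check any single segment by replaying just that segment and confirming its link in the chain, without touching the rest of the run.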
You should not define the goal as:
Re-run the whole game inside a Solana program
You should define it as:
Generate a proof package for each run that anyone can replay off-chain, while the chain stores only commitments and result summaries
master_seed
map_seed = H(master_seed, "map")
battle_seed_i = H(master_seed, "battle", i)
reward_seed_i = H(master_seed, "reward", i)
shop_seed_i = H(master_seed, "shop", i)
RunProof {
run_id,
ruleset_hash,
master_seed_reveal, // or a reference to the daily seed
initial_loadout_hash,
actions[],
checkpoints[],
final_state_hash,
final_score
}
Record one at the end of each battle:
Checkpoint {
index,
phase_type, // battle / reward / shop / route
start_state_hash,
actions_hash,
end_state_hash
}
Only store:
That is decentralized enough.
The chain itself will not compute the entire run for you.
Strictly speaking, this is:
on-chain commitment + off-chain verifiability
not “pure on-chain execution.”
A full run can contain many actions, so the log size can become large.
Typical solutions are:
This is the determinism requirement mentioned earlier.
In practice, that is the biggest engineering challenge.
Build this:
That gives you both:
That is far more realistic than trying to make every mode heavily verified on-chain.
The fact that a run contains several battles, many choices, and lots of card decisions does not prevent verification; what determines whether it can be verified is whether you can write the whole run as a “seed-driven + action-driven + fully deterministic” state machine.
But on Solana, the most reasonable implementation is not “replay the entire run on-chain,” but rather “replay the full run off-chain while the chain only verifies commitments and summaries.” That also fits Solana’s actual compute budget and fee model much better.
My next step could be to draw out a RunProof data structure + battle-checkpoint hash-chain design for you.
So what you mean is that the chain only verifies simple things like whether the seed and status values are legal, while actual cheating is determined through off-chain verification. But then I think a leaderboard built this way would end up with lots of cheaters. Even if off-chain verification can prove they cheated, how do you broadcast that back to the chain, modify consensus, and change the leaderboard?
You have identified the key point: if the protocol does not define an on-chain “appeal / challenge / leaderboard rollback” flow from the beginning, then once cheating is discovered later, the chain will not automatically fix the leaderboard. I’ll first confirm the latest official wording around Solana programs, account state, and transaction execution, then give you a “challengeable leaderboard” design that can actually be implemented.
Yes, your understanding is very accurate:
If the chain only verifies whether the seed / format / summary is self-consistent, then it can only stop “obviously invalid submissions,” not “process-level cheating that still produces a plausible-looking summary.”
And more importantly, if the protocol never predefines an on-chain flow for “challenge → judgment → leaderboard update,” then afterward you cannot change consensus just by saying, “he cheated.” In Solana, the program itself is stateless, all mutable state lives in data accounts, account data can only be rewritten by its owner program through instructions, and transactions execute atomically. So whether a leaderboard can be corrected depends on whether you wrote executable correction rules into the program in advance.
So the answer is not “after detecting cheating off-chain, how do you broadcast it back to the chain so the chain automatically changes the leaderboard?” The real answer is:
The most practical way is to split the leaderboard into two layers:
After a player submits a run, it first enters a pending / provisional leaderboard.
At this stage, the chain only checks:
These are the kinds of checks that fit the chain well, because they involve only a small number of accounts and fields. Solana transactions and instructions are executed based on explicitly passed accounts + explicit instruction data; the chain will not magically read an entire external gameplay process for you.
Give each run a challenge window, such as 1 hour, 24 hours, or “until the daily challenge ends.”
Within that window, anyone can challenge the run by submitting:
If the window expires without a successful challenge, the entry moves from provisional to final leaderboard status.
This is not “changing consensus”; it is simply that the leaderboard account itself is still in a protocol-defined non-final state, and later it is legitimately advanced to final by program instructions. In Solana, state is just account data. As long as the program is called and has writable access to the account, it can update that state according to the rules you defined.
Those are two completely different things.
You cannot expect this:
Because once a Solana transaction has executed successfully, the chain will not let the application layer arbitrarily roll it back. What the application layer can do is send a new valid transaction to update your leaderboard account, mark a run as invalid, slash a bond, and recalculate ranking. Transactions are atomic, and rollback only occurs when the current transaction fails—not because someone later disagrees with a previous application-level state.
I recommend structuring it like this.
submit_run()

The player submits:
- run_id
- seed_commit or daily_seed_id
- result_summary
- score
- final_state_hash
- action_log_hash
- proof_uri or proof_root
- bond

After light validation, the chain:
- creates RunRecord { status = Pending }
- sets challenge_deadline

challenge_run()

Within the allowed period, a challenger submits:
- run_id
- challenge_type
- evidence_hash / proof payload

The chain updates the run status to:
UnderChallenge

resolve_challenge()

There are two possible paths here:
Only handle cheating types that the chain can validate cheaply, such as:
These can be directly judged by the program:
- mark the run Invalid

More complex cheating—such as secretly changing drop rates mid-run—is usually not suitable for full on-chain replay.
In that case, your protocol needs a judgment entry point, for example:
They run the verifier off-chain, determine that cheating happened, and then call:
- slash_run(run_id, signed_verdict)
- resolve_challenge(run_id, verdict)

So what gets “broadcast back to the chain” is not public opinion—it is a judgment instruction that conforms to your protocol.
What you are worried about is:
Will the leaderboard just fill up with cheaters first, and even if we know off-chain that they cheated, there is no way to fix it on-chain?
The answer is:
If you build an “instantly final leaderboard,” yes, that can happen.
If you build a “challengeable optimistic leaderboard,” then it does not become a fatal problem.
Because in the latter design:
- entries start as Pending
- they become Final only after the challenge window passes
- they can be marked Slashed/Invalidated through a dedicated penalty / invalidation instruction

Then you must accept one reality:
There will always be a question of who has the authority to determine cheating in the final step.
Because seed + final summary alone is not enough to let the chain automatically judge every kind of process-level cheating.
At minimum, you must choose one of these tradeoffs:
Advantages:
Disadvantages:
Advantages:
Disadvantages:
For example, later you might build:
Advantages:
Disadvantages:
I recommend separating modes.
Do not build a strong-consensus leaderboard. Only do:
That way, even if someone cheats, they only pollute the display layer, not asset security.
This is where you build your “serious leaderboard”:
- action_log_hash

Only issue them from the Final leaderboard:
That way, you concentrate the hardest problem onto a small number of high-value runs instead of requiring perfect on-chain judgment for every run.
RunRecord
status: Pending | UnderChallenge | Final | Invalid
submitter: Pubkey
score: u64
seed_commit: [u8; 32]
action_log_hash: [u8; 32]
final_state_hash: [u8; 32]
proof_root_or_uri_hash: [u8; 32]
challenge_deadline: i64
bond: u64
ChallengeRecord
run_id: u64
challenger: Pubkey
reason_code: u8
evidence_hash: [u8; 32]
status: Open | ResolvedSuccess | ResolvedFail
bond: u64
LeaderboardEntry

Do not hard-code ranking positions as fixed state.
A better approach is:
- rank only runs with Final status
- Pending/Invalid runs do not participate in final reward settlement

That way, you do not even need to constantly “re-sort the on-chain leaderboard data structure”; the frontend or an indexer can derive the final leaderboard from statuses.
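A sketch of that derivation on the indexer/frontend side, mirroring a simplified RunRecord (field names are illustrative):

```typescript
type RunStatus = "Pending" | "UnderChallenge" | "Final" | "Invalid";

interface RunRecord {
  submitter: string;
  score: number;
  status: RunStatus;
}

// The chain only stores records + statuses; the leaderboard is a pure view:
// filter to Final runs, sort by score descending. Re-deriving it is cheap,
// so invalidating a run never requires re-sorting on-chain state.
export function finalLeaderboard(runs: RunRecord[]): RunRecord[] {
  return runs
    .filter((r) => r.status === "Final")
    .sort((a, b) => b.score - a.score);
}
```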
From an application-design perspective, this is not “modifying consensus,” but:
updating application state through a new transaction.
Solana’s consensus layer is responsible for:
Your program is responsible for:
So the essence of this is protocol design, not the base chain’s consensus magically rescuing you.
If you do not want a centralized backend, and you also do not want the leaderboard to be full of cheaters:
then do not build an instantly final leaderboard.
Build this instead:
submit -> Pending -> challenge window -> Final -> reward issuance
This is the single most important protocol design line in your project.
Without it, “off-chain verifiability” is basically just a slogan. With it, “off-chain verifiability” becomes a mechanism that can actually affect on-chain outcomes.
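The submit -> Pending -> challenge window -> Final flow can be written as an explicit transition table, which is also roughly what the Anchor instructions would enforce (timestamps are unix seconds; event names are illustrative):

```typescript
type Status = "Pending" | "UnderChallenge" | "Final" | "Invalid";
type Event = "challenge" | "challenge_upheld" | "challenge_failed" | "finalize";

// Pure transition function: disallowed transitions are no-ops, matching the
// idea that on-chain state only changes through valid protocol instructions.
export function nextStatus(
  current: Status,
  event: Event,
  now: number,
  challengeDeadline: number,
): Status {
  if (current === "Pending" && event === "challenge" && now < challengeDeadline)
    return "UnderChallenge";
  if (current === "Pending" && event === "finalize" && now >= challengeDeadline)
    return "Final";
  if (current === "UnderChallenge" && event === "challenge_upheld") return "Invalid";
  if (current === "UnderChallenge" && event === "challenge_failed")
    return "Pending"; // back to Pending; can be finalized once the window ends
  return current;
}
```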
My next step could be to define a challengeable leaderboard protocol for you, including the sequence diagram and Anchor instruction design for submit_run / challenge_run / resolve_run / finalize_run.
If we follow the idea of a final leaderboard, I have two questions:
As for the economy part, my current idea is that players start a game and pay gas; if they succeed, they receive an NFT card, most likely common and occasionally rare. Several common ones can be randomly combined into a rare one. Then there could be a separate mode where players battle using these NFTs, with the winner taking one NFT from the loser and destroying the loser’s other NFTs. There is also the earlier daily challenge mode. These online battle parts are not within hackathon scope for now. I want to know whether this mechanism is reasonable.
Let me give you the conclusion first, then help tighten the logic into something that is actually feasible for a hackathon.
No. At least during the hackathon phase, basically not.
The reality is:
👉 So there is one important thing to correct in your mental model:
❌ “Off-chain verifiable” ≠ someone will automatically verify it
✅ It only means “it can be verified if needed”
Yes. If you build a full challenge / bond / arbitration mechanism, it will clearly shift toward GameFi.
And also:
👉 So your instinct is correct:
you should not build a full verification economy system during the hackathon
Instead:
Treat “gameplay experience + asset generation” as the core
The leaderboard is just for display, not the basis of value settlement
What you described is:
- Players start a game → pay gas
- If successful → they get an NFT drop (common / rare)
- Common NFTs can be combined into rare ones
- Later there is PvP using NFTs
- Winners take NFTs from losers
- Losers’ NFTs are destroyed
My direct judgment is:
Your current logic is:
“A successful run → an NFT drops”
If that drop is determined by:
Then the result is:
👉 Players can farm NFTs
For example:
The result:
Your design is:
The winner gets one NFT from the loser + all the loser’s remaining NFTs are destroyed
That is a huge problem.
👉 This is called:
Full Loot PvP
It is usually only appropriate in:
👉 For a game like yours, which is more strategy-card + roguelike:
it does not fit
Right now you have:
But you do not yet have:
👉 In simple terms:
this is not yet a complete economy system
Let me give you the version that fits a hackathon best.
Change it to:
👉 Only the daily challenge mode produces NFTs
Why:
daily_seed + player performance → deterministic drop
For example:
👉 Make it fully deterministic, not probability-based
Normal runs:
👉 This prevents asset farming
Do not start with complicated battle-oriented NFTs
Start with:
rarity = f(score, seed, rules)
Not probability.
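A deterministic tier function might look like the following sketch; the cut-off values are illustrative placeholders, not values from the design:

```python
# Tier cut-offs are illustrative placeholders, not values from the design.
RARITY_THRESHOLDS = [(3000, "Rare"), (1500, "Uncommon"), (0, "Common")]

def drop_tier(score: int) -> str:
    # Deterministic: the same score always yields the same tier, so two players
    # with identical runs on the same daily seed receive identical drops. No RNG.
    for cutoff, tier in RARITY_THRESHOLDS:
        if score >= cutoff:
            return tier
    return "Common"  # defensive fallback for scores below every cutoff
```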
Your original idea:
Several commons → random rare
Change it to:
👉 Deterministic crafting
3 commons + burn → 1 rare (fixed)
Or:
combine common NFT attributes → new NFT
👉 Avoid randomness
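Deterministic crafting can be sketched like this; the hash-derived identifier is an illustrative assumption, and the point is only that the output is a pure function of the burned inputs:

```python
import hashlib

def craft_rare(common_mints: list) -> str:
    # Deterministic crafting: exactly 3 commons are burned into 1 rare whose
    # identity is a pure function of the inputs (a hash), never a dice roll.
    if len(common_mints) != 3:
        raise ValueError("crafting requires exactly 3 common NFTs")
    digest = hashlib.sha256("|".join(sorted(common_mints)).encode()).hexdigest()
    return f"rare-{digest[:8]}"  # the same 3 commons always produce the same rare
```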
You can change it like this:
👉 This will not drive players away
Handle it simply:
Or:
👉 Reward only the Top N (manual / soft verification is enough)
You solve all the core problems:
| Problem | Solution |
|---|---|
| No one verifies | You do not need verifiers |
| Drop-rate cheating | Use deterministic rules |
| Seed rerolling | Use a daily seed |
| Economy collapse | Control the issuance entry point |
| Game turns into GameFi | Keep gameplay first |
| Hackathon complexity | Reduce it to a buildable scope |
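The “daily seed” row in the table can be made concrete with a commit-reveal sketch; the seed format and function names here are assumptions:

```python
import hashlib

def seed_commit(daily_seed: str) -> str:
    # Publish sha256(seed) before the day starts; reveal the seed when the day
    # ends. Players cannot reroll: everyone plays against the committed seed.
    return hashlib.sha256(daily_seed.encode()).hexdigest()

def verify_reveal(commitment: str, revealed_seed: str) -> bool:
    # Anyone can check the revealed seed against the pre-published commitment.
    return seed_commit(revealed_seed) == commitment
```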
👉 Do not try to make every run fair and trustworthy
👉 Instead:
Make only the daily challenge mode the single trustworthy asset entry point
That way you:
If you want, my next step could be to design the key part for you:
👉 A full formula for “daily challenge + deterministic drops + NFT mint rules”
Including:
That will directly determine whether your demo feels like a real product.
Your design is very good. My current plan for the hackathon is to finish the single-player mode, a daily leaderboard, and NFT minting for collection only, without PvP for now. About the daily leaderboard:
It can—but the responsibility boundary has to be made explicit.
For a hackathon, yes, that is basically the idea—but one extra sentence is important:
You are not building a “strong-security leaderboard”; you are building a “leaderboard with verifiability.”
Those are very different things.
If, during the hackathon phase, you achieve the following, that is already very good:
That means your leaderboard is not a black box, but something that is auditable.
But that does not mean:
So your selling point should be phrased more like this:
We implemented a replayable and verifiable daily challenge system; the hackathon version provides public verification capability, but does not introduce a complex on-chain challenge economy or arbitration system.
That is honest, and it is also reasonable.
If the NFT is strongly tied directly to leaderboard rank, then the current light-verification setup is not enough to carry that responsibility.
This is the most important judgment.
Because once you have a mapping like this:
Then you have entered a new phase:
The leaderboard is no longer just a display layer; it becomes a value-allocation layer.
And once it becomes a value-allocation layer, you can no longer rely on “theoretically verifiable.”
Because then the following can happen:
At that point, the verification mechanism must be able to actually affect the outcome, not just allow someone to point out a problem locally.
So the answer is:
This is the hackathon approach I recommend most.
For example:
In this design:
That means even if the top few places on the leaderboard are polluted by cheaters, the NFT system suffers much less.
Because the issuance criterion is a threshold, not a relative rank.
That dramatically lowers the burden on verification.
For example:
This makes the problem much harder, because then you must solve:
That clearly goes beyond hackathon comfort range.
Instead, do this:
That is the most stable option.
Because you change the problem from:
Who is the real rank #1?
into:
Did this run reach a clear, explainable threshold?
The first requires strong verification, dispute handling, and finalization rules.
The second only requires:
The complexity difference is enormous.
Even in mature games, if:
then top ranks being polluted is very common.
But in Slay the Spire, that problem is not as fatal because:
The moment you bind NFTs to it, the risk becomes much larger.
So you cannot directly copy the logic of “ordinary games can tolerate a somewhat dirty leaderboard” into “asset issuance.”
I recommend limiting it to this scope:
That makes the responsibility boundary reasonable.
Issue based on achievement thresholds, not rank.
For example:
This still lets you bind NFTs to daily mode, and it is easy to explain:
That is a very good framing.
You are right: the more your score depends on minute in-run details, the harder it is to verify.
So the hackathon version of the score system should follow one principle:
Do not create dozens of hidden bonuses.
Try to derive the score from a small set of clear, replayable variables.
For example, use only:
And avoid too many things like:
The more complex it gets, the more expensive validation becomes.
```
Score =
    floor_reached * 100
  + boss_kill_bonus
  + hp_remaining * 10
  - turn_count * 2
  + elite_kills * 50
  + challenge_modifier_bonus
```
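A direct transcription of the formula, useful for recomputing a score during replay verification:

```python
def run_score(floor_reached: int, boss_kill_bonus: int, hp_remaining: int,
              turn_count: int, elite_kills: int,
              challenge_modifier_bonus: int) -> int:
    # Direct transcription of the formula above; every input is a coarse,
    # replayable run-level variable, so the score is cheap to recompute.
    return (floor_reached * 100
            + boss_kill_bonus
            + hp_remaining * 10
            - turn_count * 2
            + elite_kills * 50
            + challenge_modifier_bonus)
```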
Characteristics:
It can carry display-level and audit-level responsibility, but not final asset-allocation responsibility.
In more direct words:
So your best product strategy is:
Display, social competition, challenge feeling
Commemorative, collectible, proof of achievement
Strong binding between high-value assets and leaderboard rank
That will make your whole system much smoother.
Define the daily NFT as:
Daily Challenge Achievement NFT
not:
Leaderboard Prize NFT
Those two names imply completely different levels of protocol complexity.
The former is something you can build right now, and it is reasonable.
The latter requires a full system for final verification and dispute handling, which is not suitable for a hackathon.
If you want, my next step could be to help you define:
the daily challenge NFT issuance rules + score formula + metadata fields
so the hackathon-deliverable version is locked in one shot.
As for the NFT part, binding NFTs to thresholds really can reduce the impact of cheaters. I’m thinking: could I write data like the seed, actions, status, and so on into the metadata of the minted NFT, so that a cheating NFT could be easily verified as invalid and lose its value? Would that work?
I’ll first confirm the most commonly used way NFT metadata is currently carried on Solana, and then judge where the idea of putting seed / action / status into NFT metadata works—and where it falls short.
It is viable—but you need to separate two things very clearly:
1. Writing proof data into NFT metadata can greatly improve “auditability”
2. It does not automatically invalidate a cheating NFT, nor does it automatically make it lose value
Do not mix those two things together.
You can absolutely bind things like the following to the NFT:
- daily_seed_id or seed_commit
- ruleset_hash
- score
- final_state_hash
- action_log_hash
- proof_uri or replay_uri
- run_id
- minted_at
- verified_level

That turns the NFT into a verifiable challenge-achievement certificate, rather than just a picture card.
The common NFT metadata model on Solana today is that an on-chain metadata account stores basic fields and a uri, and that uri points to off-chain JSON; the official docs also make it clear that on-chain metadata usually stores only things like name, symbol, and URI, while extended information goes into the JSON referenced by the URI.
If a cheater can still mint the NFT, then:
they can write a fake score, a fake action_log_hash, and a fake proof_uri into it.

So the real question is not “can this data be written into metadata?” but rather:
Who has minting authority, how much validation happens before minting, and how legitimacy is marked after minting.
This fits your hackathon positioning very well.
You can define the NFT as:
Daily Challenge Achievement NFT
Its metadata is not just:
It also includes:
That way, when someone receives the NFT, they are not just “looking at the picture”; they can trace where it came from.
Instead of only storing raw data, a better approach is to add a field like this:
```json
{
  "attributes": [
    {"trait_type": "Daily Seed", "value": "2026-04-23"},
    {"trait_type": "Score", "value": 2480},
    {"trait_type": "Tier", "value": "Gold"},
    {"trait_type": "Ruleset Hash", "value": "0x..."},
    {"trait_type": "Action Log Hash", "value": "0x..."},
    {"trait_type": "Final State Hash", "value": "0x..."},
    {"trait_type": "Verification Status", "value": "Self-Reported"}
  ]
}
```
Then later, if you build a stronger verifier, you can upgrade the status to:
- Self-Reported
- Replay-Published
- Community-Verified
- Officially-Verified
- Disputed

That is extremely useful.
In common Solana / Metaplex metadata structures, the chain stores:
- name
- symbol
- uri
- creators

Detailed extended attributes usually live in the off-chain JSON referenced by the uri. Solana’s official docs also explicitly recommend storing the JSON in reliable persistent storage such as Arweave, IPFS, or a stable CDN.
That means:
In other words, metadata can help people verify, but it will not enforce punishment for you.
The official docs explain that whether metadata is mutable depends on the is_mutable / update authority design; if the account is mutable, the update authority can modify the metadata.
So you will face a tradeoff:
Advantages of mutable metadata:

- you can later move the Verification Status from Pending to Verified/Disputed

Disadvantages of mutable metadata:

Advantages of immutable metadata:

Disadvantages of immutable metadata:
When you say “a cheating NFT can be easily verified as invalid and lose value,” that is only true under one condition:
Your frontend, your community, and any relevant trading contexts must all recognize some shared verification standard.
Otherwise, the chain only sees:
The chain will not automatically say “this one is void.”
So the more accurate way to say it is:
Writing proof into NFT metadata makes legitimate NFTs easier to identify, but it does not guarantee that illegitimate NFTs automatically lose value.
I recommend a two-layer design.
In the metadata JSON, store:
- run_id
- day_id
- seed_commit
- ruleset_hash
- score
- action_log_hash
- final_state_hash
- proof_uri

Do not let the NFT simply declare itself valid.
A more stable approach is to let your game program or a separate registry account maintain:
- mint -> Pending
- mint -> Verified
- mint -> Disputed
- mint -> RevokedForRewards (if you need that later)

That way:
- frontends and community tools only need to trust mints whose registry status is Verified

That is enough.
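A registry of this shape can be sketched as follows; the class, the transition table, and everything beyond the four status names above are illustrative assumptions:

```python
# Allowed status transitions; this table is an illustrative assumption.
ALLOWED = {
    "Pending": {"Verified", "Disputed"},
    "Disputed": {"Verified", "RevokedForRewards"},
}

class VerificationRegistry:
    """Maintains mint -> status separately from the NFT's own metadata."""

    def __init__(self):
        self.status = {}  # mint address -> verification status

    def register(self, mint: str) -> None:
        self.status[mint] = "Pending"  # every new mint starts unverified

    def transition(self, mint: str, new_status: str) -> None:
        current = self.status[mint]
        if new_status not in ALLOWED.get(current, set()):
            raise ValueError(f"cannot go from {current} to {new_status}")
        self.status[mint] = new_status

    def is_trusted(self, mint: str) -> bool:
        # Frontends only treat Verified mints as legitimate achievements.
        return self.status.get(mint) == "Verified"
```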
This is the most stable option.
- daily_seed_id
- score
- tier
- action_log_hash
- proof_uri
- verification_status = "Unverified" or "Replay-Available"

That is already very good.
For a hackathon, that is already strong enough.
If you want to go one step further, you can keep an update authority and later change the metadata status to:
- Pending
- Verified
- Disputed

Technically this is feasible, because mutable metadata can be updated.
But I would not recommend making that too heavy from the start.
A simpler hackathon approach is:
This is an implementation detail, but an important one.
Why:
Store only these on-chain or in the metadata JSON:
- action_log_hash
- proof_uri

Then:
- proof_uri points to the replay file on IPFS / Arweave

That is the correct architecture.
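The hashes-on-chain, replay-off-chain split can be sketched like this; `action_log_hash` matches the field name above, while the canonical serialization scheme is an assumption:

```python
import hashlib
import json

def action_log_hash(actions: list) -> str:
    # Canonical-serialize the full action log and hash it; only this hash
    # (plus a proof_uri) goes into the NFT metadata, never the raw log.
    canonical = json.dumps(actions, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify_replay(actions: list, published_hash: str) -> bool:
    # Anyone holding the replay file from proof_uri can recompute the hash
    # and check it against the value embedded in the metadata.
    return action_log_hash(actions) == published_hash
```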
Not:
Put the data into the NFT, and then cheating NFTs will automatically lose value
But rather:
Embed daily challenge proof into NFT metadata and a traceable proof package, so that legitimate results become publicly auditable; the official frontend and community tools can then distinguish verified achievement NFTs from unverified ones.
That wording is more accurate and more professional.
- day_id
- run_id
- score
- tier
- ruleset_hash
- seed_commit
- action_log_hash
- final_state_hash
- proof_uri
- verification_hint
- game_version

with a verification status such as:

- Replay Available
- Replay Verified

That is very reasonable, and achievable within a hackathon.
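Assembling these fields into the off-chain metadata JSON might look like this sketch; the attribute ordering, name format, and default status are assumptions:

```python
def build_metadata(run: dict) -> dict:
    # Builds the off-chain metadata JSON from the fields listed above.
    # Field names follow the document; the layout mirrors the earlier example.
    attrs = [
        ("Day", run["day_id"]),
        ("Run", run["run_id"]),
        ("Score", run["score"]),
        ("Tier", run["tier"]),
        ("Ruleset Hash", run["ruleset_hash"]),
        ("Seed Commit", run["seed_commit"]),
        ("Action Log Hash", run["action_log_hash"]),
        ("Final State Hash", run["final_state_hash"]),
        ("Game Version", run["game_version"]),
        ("Verification Status", run.get("verification_status", "Replay Available")),
    ]
    return {
        "name": f"Daily Challenge Achievement - {run['day_id']}",
        "external_url": run["proof_uri"],  # points at the replay/proof package
        "attributes": [{"trait_type": t, "value": v} for t, v in attrs],
    }
```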
Yes, it is viable, and it is a good design—but what it improves is “verifiability and traceability,” not “automatic punishment.”
If you want cheating NFTs to truly “lose value,” you still need your frontend, community standards, or a later verification registry to distinguish which NFTs are Verified and which are merely Self-Reported.
My next step could be to write out an NFT metadata JSON schema for you so all of these fields are locked in at once.