Sunshine Recipes

Welcome! We're building governance infrastructure.

The software stack was carefully chosen to ensure resilience to capture and aspire towards censorship resistance. The blockchain is implemented with Substrate to take advantage of the cutting edge in distributed network infrastructure.

The network design minimizes on-chain state by storing sensitive data off-chain in ipfs-embed and only storing content identifiers on-chain. Efficient client-side networks enable the secure sharing of off-chain data among the relevant subscribers.

In a world increasingly fraught with data mismanagement (the Cloud), Sunshine stores all sensitive data on local hardware in encrypted form. This architecture is conducive to modern key rotation mechanisms and client-side computation. It is inspired by Local First design principles.

Go to the splash for high-level details. Now, onwards, to the code 🚀

Pallets

All objects and relationships are in ./utils. Module implementations that conform to the Substrate pallet rules are in ./pallets/*.

To learn more about sunshine-bounty design, check out the high-level overviews for the core pallets below.

To see more Substrate pallet patterns in action, check out the Substrate Recipes.

Bounty Pallet

This pallet placed 3rd 🏆 in Hackusama with a submission that included a custom substrate-subxt client to update github issue information based on changes to chain state.

Post Bounties

Anyone can post bounties as long as the amount is above the module minimum. The module minimum is set in the pallet's Trait.

pub trait Trait {
    ...
    /// Minimum deposit to post bounty
    type MinDeposit: Get<BalanceOf<Self>>;
}

The public runtime method signature is

fn post_bounty(
    origin,
    issue: EncodedIssue,
    info: T::IpfsReference,
    amount: BalanceOf<T>,
) -> DispatchResult

The amount is checked against the module's minimum deposit constraint. The issue input is the binary encoding of github issue metadata.

type EncodedIssue = Vec<u8>;
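
The deposit check itself is a one-line guard near the top of post_bounty. A minimal sketch; the exact comparison and error variant name are assumptions:

ensure!(amount >= T::MinDeposit::get(), Error::<T>::DepositBelowModuleMin);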

The storage in this pallet uses a map's keyset to enforce a limit of one github issue per posted bounty.

decl_storage!{
    /// Prevent overlapping usage of issues
    pub IssueHashSet get(fn issue_hash_set): map
        hasher(blake2_128_concat) EncodedIssue => Option<()>;
}

The first line in this method checks that the encoded issue metadata has not been associated with another bounty on-chain.

ensure!(<IssueHashSet>::get(issue.clone()).is_none(), Error::<T>::IssueAlreadyClaimedForBountyOrSubmission);

This global hashset pattern is useful when defining a 1-to-1 mapping between an off-chain identity (e.g. unique github issue) and an on-chain object (e.g. bounty).
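
The write half of the pattern happens after all checks pass: the issue is inserted into the hashset so no later bounty or submission can reuse it. A minimal sketch; the surrounding bookkeeping and identifier names are assumptions:

// claim the issue alongside storing the new bounty (hypothetical ids)
<IssueHashSet>::insert(issue, ());
<Bounties<T>>::insert(new_bounty_id, new_bounty);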

Contribute to Bounties

Anyone can contribute to bounties. There are no refunds, and contributing does not confer any representation in spending governance. The only constraint is that outside contributions must meet the module minimum.

pub trait Trait {
    ...
    /// Minimum contribution to posted bounty
    type MinContribution: Get<BalanceOf<Self>>;
}

The public runtime method signature is

fn contribute_to_bounty(
    origin,
    bounty_id: T::BountyId,
    amount: BalanceOf<T>,
) -> DispatchResult

The first line checks that the amount is at least the module minimum.

ensure!(amount >= T::MinContribution::get(), Error::<T>::ContributionMustExceedModuleMin);
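
The rest of the dispatchable moves the contribution into the bounty's dedicated account and records the new total. A hedged sketch, reusing the bounty_account_id helper seen in the approval path below; the add_total method is an assumption:

let contributor = ensure_signed(origin)?;
let bounty = <Bounties<T>>::get(bounty_id).ok_or(Error::<T>::BountyDNE)?;
// move funds from the contributor into the bounty's account
T::Currency::transfer(
    &contributor,
    &Self::bounty_account_id(bounty_id),
    amount,
    ExistenceRequirement::KeepAlive,
)?;
// record the increased total available funding (hypothetical helper)
<Bounties<T>>::insert(bounty_id, bounty.add_total(amount));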

Apply for Bounty

Anyone except the poster can apply for a bounty. The issue associated with the application (submission) must be unique and independent of the bounty issue to which it applies. Likewise, the bounty identifier that the submission references must exist in on-chain storage for the submission to be valid.

Here is the runtime method header with the checks required for valid submissions.

fn submit_for_bounty(
    origin,
    bounty_id: T::BountyId,
    issue: EncodedIssue,
    submission_ref: T::IpfsReference,
    amount: BalanceOf<T>,
) -> DispatchResult {
    ensure!(<IssueHashSet>::get(issue.clone()).is_none(), Error::<T>::IssueAlreadyClaimedForBountyOrSubmission);
    let bounty = <Bounties<T>>::get(bounty_id).ok_or(Error::<T>::BountyDNE)?;
    let submitter = ensure_signed(origin)?;
    ensure!(submitter != bounty.depositer(), Error::<T>::DepositerCannotSubmitForBounty);
    ensure!(amount <= bounty.total(), Error::<T>::BountySubmissionExceedsTotalAvailableFunding);
    ...
}

If any of these checks fail, the method is still safe because no storage values have been changed. This demonstrates the verify first, push to storage last principle.
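
The write phase that follows the elided checks illustrates the same principle: only after every check passes are the issue claimed and the submission stored. A rough sketch; the id generation, constructor, and event names below are assumptions rather than the pallet's exact API:

// all checks passed; now push to storage
let id = Self::generate_submission_id();        // hypothetical id generator
let submission = BountySubmission::new(         // hypothetical constructor
    bounty_id, submitter, submission_ref, amount,
);
<IssueHashSet>::insert(issue, ());
<Submissions<T>>::insert(id, submission);
Self::deposit_event(RawEvent::BountySubmissionPosted(id, bounty_id, amount));
Ok(())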

Approve Bounty

Only the account that posted the bounty can approve submissions. Submission approval immediately transfers funds to the recipient.

Here is the runtime method header with the checks required for valid approvals.

fn approve_bounty_submission(
    origin,
    submission_id: T::SubmissionId,
) -> DispatchResult {
    let approver = ensure_signed(origin)?;
    let submission = <Submissions<T>>::get(submission_id).ok_or(Error::<T>::SubmissionDNE)?;
    ensure!(submission.state().awaiting_review(), Error::<T>::SubmissionNotInValidStateToApprove);
    let bounty_id = submission.bounty_id();
    let bounty = <Bounties<T>>::get(bounty_id).ok_or(Error::<T>::BountyDNE)?;
    ensure!(bounty.total() >= submission.amount(), Error::<T>::CannotApproveSubmissionIfAmountExceedsTotalAvailable);
    ensure!(bounty.depositer() == approver, Error::<T>::NotAuthorizedToApproveBountySubmissions);
    // execute payment
    T::Currency::transfer(
        &Self::bounty_account_id(bounty_id),
        &submission.submitter(),
        submission.amount(),
        ExistenceRequirement::KeepAlive,
    )?;
    ...
}
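
After the transfer succeeds, the elided tail of the method updates the bookkeeping. A hedged sketch; the setter, removal, and event names are assumptions:

// the earlier total >= amount check guarantees this cannot underflow
let new_total = bounty.total() - submission.amount();
<Bounties<T>>::insert(bounty_id, bounty.set_total(new_total)); // hypothetical setter
<Submissions<T>>::remove(submission_id);
Self::deposit_event(RawEvent::BountyPaymentExecuted(bounty_id, submission_id, new_total));
Ok(())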

Next Steps

This module works for single-account governance, but it isn't sufficiently expressive for democracy (direct or representative). Future versions will allow contributors to select representatives and vote to approve submissions. See the grant pallet for an example of an on-chain grants program that uses org voting to make grant decisions.

Org Pallet

This pallet handles organization membership and governance. Each weighted group of accounts stored in this pallet has a unique OrgId. This identifier is often used in inheriting modules to establish ownership of the organization over associated state.

Share Ownership

Each member (AccountId) in an org has some quantity of Shares in proportion to their relative ownership and voting power. This ownership metadata is stored in runtime storage like

double_map OrgId, AccountId => Option<ShareProfile<T>>;
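
Spelled out in decl_storage syntax, the double map looks roughly like this; the storage item name, prefix, and exact type paths are assumptions:

decl_storage! {
    trait Store for Module<T: Trait> as Org {
        /// Membership and relative ownership for each org
        pub Members get(fn members): double_map
            hasher(blake2_128_concat) T::OrgId,
            hasher(blake2_128_concat) T::AccountId
            => Option<ShareProfile<T>>;
    }
}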

Pallets that inherit this pallet can check membership of an AccountId in an OrgId by checking whether the value stored under the key (OrgId, AccountId) is Some(ShareProfile<T>). There is an associated method for this purpose.

let auth = <org::Module<T>>::is_member_of_group(org, &who);
ensure!(auth, Error::<T>::NotAuthorized);

Default Governance

Every group has an optional sudo (Option<AccountId>). This position is set in the organization state upon initialization.

pub struct Organization<AccountId, OrgId, IpfsRef> {
    /// Optional sudo, encouraged to be None
    sudo: Option<AccountId>,
    /// Organization identifier
    id: OrgId,
    /// The constitution
    constitution: IpfsRef,
}

The sudo is intended to be a representative selected by the group to keep things moving, but their selection will be easily revocable. The rank module expresses representative selection with enforced term limits for this exact purpose.
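
Inheriting pallets can gate privileged calls on this position, mirroring the membership check above. A minimal sketch, assuming a hypothetical organization getter and is_sudo helper:

let caller = ensure_signed(origin)?;
// hypothetical getter returning the stored Organization struct
let org_state = <org::Module<T>>::organization(org).ok_or(Error::<T>::OrgDNE)?;
ensure!(org_state.is_sudo(&caller), Error::<T>::NotAuthorized);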

Sunshine Client

At the core, there is a collection of crates shared between decoupled, composable sunshine modules. The core crates handle cryptography, codecs, communication with the Dart VM, the Substrate light client, and IPFS.

Sunshine modules have a Substrate runtime component, a client component, and CLI and FFI interfaces.

architecture.svg

The node, runtime, client, and FFI are composed of sunshine modules and live in the sunshine repo. Sunscreen is our Android and iOS Flutter UI that uses the sunshine FFI to create a mobile-first user experience.

Offchain data

Offchain data like user and team chains, chat messages, and shared files is stored in ipfs-embed. ipfs-embed is a performant Rust IPFS implementation focused on providing atomic and durable transactions, and the first implementation to take a database-centric design approach. Blocks are reference counted, and insertions only succeed if all referenced blocks are in the store. Blocks are pinned to mark them as used, ensuring that the block and any of its references are not garbage collected.

IPFS transactions are composed of insert, pin, and unpin operations. When a transaction executes, these operations are first expanded into insert, insert_reference, remove_reference, set_pin, and remove operations, which are written to a write-ahead log to ensure that they are atomic and durable. The insert operations are committed atomically to paritydb, where the blocks are stored. Then the metadata operations insert_reference, remove_reference, and set_pin are applied to sled, a key/value store used for maintaining and querying the block metadata. Finally, the remove operations are committed to paritydb and the transaction is marked as complete in the write-ahead log.

When a crash occurs and the log needs to be replayed, the following algorithm is used: if the insert operations didn't succeed, the transaction is aborted by removing the inserted blocks from the db; if they did succeed, the metadata and remove operations can be reapplied without adverse effects.
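
The sequence is easier to see in a small, self-contained model. The sketch below uses in-memory stand-ins for the write-ahead log, paritydb, and sled; the types and names are illustrative only, not the ipfs-embed API.

use std::collections::{HashMap, HashSet};

/// Toy model of the expanded operation set described above.
/// These types are illustrative stand-ins, not the ipfs-embed API.
#[derive(Clone)]
enum Op {
    Insert(u64, Vec<u8>),      // block id, block data
    InsertReference(u64, u64), // parent block references child block
    RemoveReference(u64, u64),
    SetPin(u64, bool),
    Remove(u64),
}

#[derive(Default)]
struct Store {
    log: Vec<Op>,                  // write-ahead log
    blocks: HashMap<u64, Vec<u8>>, // stands in for paritydb (block bodies)
    refs: HashSet<(u64, u64)>,     // stands in for sled (reference metadata)
    pins: HashSet<u64>,            // stands in for sled (pin metadata)
}

impl Store {
    /// Commit in the order described above: write the log, insert blocks,
    /// apply metadata, apply removes, then mark the transaction complete.
    fn commit(&mut self, ops: Vec<Op>) {
        // a non-empty log here would mean the previous transaction crashed
        debug_assert!(self.log.is_empty());
        self.log = ops.clone();
        for op in &ops {
            if let Op::Insert(id, data) = op {
                self.blocks.insert(*id, data.clone());
            }
        }
        for op in &ops {
            match op {
                Op::InsertReference(p, c) => { self.refs.insert((*p, *c)); }
                Op::RemoveReference(p, c) => { self.refs.remove(&(*p, *c)); }
                Op::SetPin(id, true) => { self.pins.insert(*id); }
                Op::SetPin(id, false) => { self.pins.remove(id); }
                _ => {}
            }
        }
        for op in &ops {
            if let Op::Remove(id) = op {
                self.blocks.remove(id);
            }
        }
        self.log.clear(); // transaction complete
    }
}

fn main() {
    let mut store = Store::default();
    // pin a small two-block graph
    store.commit(vec![
        Op::Insert(1, b"root".to_vec()),
        Op::Insert(2, b"leaf".to_vec()),
        Op::InsertReference(1, 2),
        Op::SetPin(1, true),
    ]);
    // later, unpin and prune the leaf
    store.commit(vec![
        Op::SetPin(1, false),
        Op::RemoveReference(1, 2),
        Op::Remove(2),
    ]);
    assert_eq!(store.blocks.len(), 1);
    println!("blocks remaining: {}", store.blocks.len());
}

The ordering is what matters: by the time metadata or removes are applied, the block inserts are already durable, which is what makes the crash-replay rules above safe.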

Keybase

sunshine-keybase

Chain

The chain module is a reusable abstraction for building private proof-of-authority chains on IPFS, with Substrate providing authorization and consensus on the current head of the chain. When authoring a block on IPFS, a race condition can occur: because Substrate provides a total order of transactions, only one transaction will succeed in updating the head of the chain; the other client creates a new block on top of the new head and retries the failed operation.

chain_module.svg
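
A toy model of that retry loop, with compare_exchange standing in for Substrate's total ordering of head updates; everything here is illustrative, not the chain module's API:

use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

/// Toy model of the head-update race: a client whose parent block is stale
/// loses the race, re-authors on top of the new head, and retries.
fn append_block(head: &AtomicU64, device: u64) -> u64 {
    loop {
        let parent = head.load(Ordering::SeqCst);
        // "author" a new block on top of the observed parent
        let new_block = parent * 10 + device;
        if head
            .compare_exchange(parent, new_block, Ordering::SeqCst, Ordering::SeqCst)
            .is_ok()
        {
            return new_block; // our head update was accepted
        }
        // another device updated the head first; loop and retry on the new head
    }
}

fn main() {
    let head = Arc::new(AtomicU64::new(1));
    let handles: Vec<_> = (2..=3)
        .map(|device| {
            let head = Arc::clone(&head);
            thread::spawn(move || append_block(&head, device))
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    // Both devices extended the chain, one after the other.
    println!("final head: {}", head.load(Ordering::SeqCst));
}

The losing device simply observes the new head and authors its block on top of it, which is the retry behavior described above.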

Identity

The keybase identity module uses the sunshine chain module to manage the user's chain, which stores the user key, device keys, password, and social media accounts. Private data shared between devices is encrypted with the user's private key. When a new device is provisioned, a key is generated locally on the device, and a provisioning protocol is used to communicate between the new device and the provisioning device.

keybase-module.svg

Password changes are stored encrypted in the user chain. When a device receives a block with a password change, it re-encrypts its local device key using the new password. This ensures that the user only needs to remember one password.

Social media accounts are linked to a chain account by submitting a proof in the social media profile and on the user's chain. Other users can find the on-chain account on the social media page and verify that both are controlled by the same cryptographic identity. This allows us to use github usernames as aliases without compromising the decentralized nature or security that blockchains provide. While resolving a social media account to an on-chain identity requires the service to be online, already-resolved identities are stored locally. This means that even if github is offline, transfers to already verified github accounts can be performed.

Finally, the user and team keys will be used in other modules to send encrypted messages, share encrypted files, and vote on decisions.

Demo Instructions

To run sunshine-identity locally,

  1. Clone sunshine-keybase and build the node in release mode
$ git clone https://github.com/sunshine-protocol/sunshine-keybase
$ cd sunshine-keybase/bin/node
$ cargo build --release

Once it compiles, return to the repository root and run the node in dev mode

$ cd ../../
$ ./target/release/test-node --dev

Use the purge-chain command to purge the database if you need to kill the local chain and restart.

$ ./target/release/node-identity purge-chain --dev
  2. Follow the directions in the sunshine-keybase-ui README to see the Flutter UI work alongside the local test node. The interface demonstrates identity registration, password reset, and github authentication (account ownership proofs). Here is a demo video by @shekohex.

subxt

substrate-subxt is a Rust client built to interface with Substrate chains. It provides light client support, making it possible to work with untrusted Substrate nodes.

It is unique in its support for writing integration tests by replacing the light client with a full node. This functionality is demonstrated in sunshine-keybase.

To see these tests in action, clone the repo and run the following commands

$ git clone https://github.com/sunshine-protocol/sunshine-keybase
$ cd sunshine-keybase && cd chain/client
$ cargo test --release

Here is an example of expected output. UnknownSubscriptionId errors are usually OK.

running 3 tests
[2020-10-06T18:28:36Z ERROR jsonrpsee::client] Client Error: UnknownSubscriptionId
[2020-10-06T18:28:36Z ERROR jsonrpsee::client] Client Error: UnknownSubscriptionId
[2020-10-06T18:28:42Z ERROR jsonrpsee::client] Client Error: UnknownSubscriptionId
test tests::test_sync ... ok
test tests::test_concurrent ... ok
test tests::test_chain ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

   Doc-tests sunshine-chain-client

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

There are more client integration tests in identity/client.

➜  client git:(master) cargo test --release
   Compiling sunshine-identity-client v0.2.3 (/Users/4meta5/sunshine-protocol/sunshine-keybase/identity/client)
    Finished release [optimized] target(s) in 29.16s
     Running /Users/4meta5/sunshine-protocol/sunshine-keybase/target/release/deps/sunshine_identity_client-d858ac81e954b312

running 4 tests
test utils::tests::parse_identifer ... ok
test client::tests::provision_device ... ok
test client::tests::change_password ... ok
test client::tests::prove_identity ... ok

test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

   Doc-tests sunshine-identity-client

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

Shared Open Problems

The following open problems are shared across the Web3 space. Our ability to collaboratively fund development of critical infrastructure will decide the direction of this technology.

Substrate Warp Sync

Without warp sync, light clients lack functionality until they are fully synced. This can take a long time; we've experienced anywhere from a few hours to over a day for short-lived, low-throughput test networks.

OpenEthereum clients receive a snapshot over the network to get the full state at the latest block, and then fill in the blocks between the genesis and the snapshot in the background. The code is here.

Substrate warp sync is discussed in this issue.

Rust-Libp2p Nat Traversal


NAT traversal and firewall traversal are required when peers want to establish a connection to each other. In a traditional server architecture the server gets a public IP, like a phone number. Mobile networks and home networks share an IP address, so you can't directly connect to a device that is on a different network. This is done for multiple reasons. Since the IPv4 address space is a 4-byte number, only about 4 billion devices can have a unique IP address, and the number of devices connected to the internet today vastly exceeds that amount. But even with IPv6, whose 16-byte address space allows every device to have a unique IP, the problem of NAT traversal will persist: in most cases you don't want arbitrary connections to be opened to arbitrary devices, so IPv6 firewalls are configured to allow only outgoing connections and reject incoming ones. Techniques for NAT traversal and firewall traversal are, and will remain, an important part of p2p networks.

Transport Port Reusability

Transports are assumed by libp2p to have distinct listening and dialing ports. This is an issue when trying to add a QUIC transport or when using TCP ports with SO_REUSEPORT. Without port reuse, NAT traversal becomes impossible without a relay. For the first task, changes to libp2p-core and libp2p-swarm will be made as discussed here. The new API will be validated by adding a TCP transport that supports reusing ports, and a prototype libp2p-quic crate will be released using this new API. The libp2p-quic crate will live in its own repo until the rust-libp2p team has time to review and merge the new transport. Extensive work on the QUIC transport has already been done by Parity employees, but without these API changes it will remain a second-class citizen.

Libp2p Relay

Implement the libp2p relay protocol, including tests and examples showing how to use a third party to establish a connection between two peers that cannot communicate directly because of a local NAT or firewall. The deliverable will be a working libp2p-relay crate. The netsim-embed network simulator we developed will help with writing automated tests to verify that it functions correctly.

Rust Substrate Client

Substrate is written in Rust for a reason; the requirements for blockchain technology align with Rust's dual promise of speed and safety. These requirements extend to the client and make Rust the most practical language for building high-performance, secure clients.

An efficient Rust Substrate client would be able to subscribe to updates only relevant to the client's authorized account(s). Moreover, a well-designed Substrate Rust client would be able to use type metadata to dynamically decode relevant storage data for user display. Although we're not quite there yet, that's the intended direction of substrate-subxt.

As users of substrate-subxt, Sunshine developers often contribute upstream. The sunshine-keybase repo demonstrates how substrate-subxt is integrated into the Rust client implementation.