so the Channel send/recv errors happen when we run ping_node to our own address while on an inbound session
maybe cos we are creating channels with addrs that we are already using for inbound connections?
this is the erroring code https://github.com/darkrenaissance/darkfi/blob/23403f9c8336c55548253bb018b82f7fa280581a/src/net/protocol/protocol_seed.rs#L66
this is the same code but works w/o error since it only runs on outbound connections https://github.com/darkrenaissance/darkfi/blob/greylists/src/net/protocol/protocol_address.rs#L172
gna sleep on it. gn
PUPU: fn foo() { bar(); foo_body(); }
PPUU: fn foo() { foo_body(); bar(); }
seems the choice is arbitrary
since we will have fee calls as siblings, PPUU will be at least 2x faster than PUPU
ACTION submits xir recommendation to the committee
gm
free-module: Fee calls are supposed to be the first call executed, with no dependencies above or below
If that fails, we don't have to execute the rest of the tx
If it passes, we have to verify the rest
And then at the end we see if enough fee was paid
The signatures and zk cost is known in advance
wasm exec is fast and shouldn't be burdensome
with PPUU, if any of P fails then the tx stops, so you can compute them all in parallel
even the U's theoretically could be parallel (maybe in the future they will), but i agree it's safer to do them sequentially for now (due to unforeseen non-determinism introduced)
so PPUU worst case (a call fails) is the same as or better than PUPU, but best case is much better (Nx faster, where N = number of calls, and N >= 2)
about it being more rational to devs: the difference is aesthetic, that is, where the child call occurs in the trace (see the foo() examples above)
and in fact solidity recommends checks-effects-interactions, so typically solidity devs would have this last anyway
I'd argue it's a lot worse
Since it doesn't let you create new coins and use them in the same tx
yes it does
No it does not
think of it this way
you have the call params, and you know the call will succeed
you know the result of that operation
a contract should never be reading the DB directly - the contract can provide helpers in model.rs for the params (so external contracts can operate on the params and see: "ah these coins will be created")
sry if this feels like trolling tho, i don't mean to revive dead args, but just stating my case
That's a bad idea.
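(A minimal sketch of the two orderings, with hypothetical process()/update() stubs rather than the real validator API; P is the stateless check phase, U the state update:)

    // Hypothetical stand-ins for the two phases of verifying a call.
    fn process(call: &str) -> Result<(), String> {
        println!("P({call})"); // stateless check; the P's are independent
        Ok(())
    }
    fn update(call: &str) {
        println!("U({call})"); // state write; kept sequential for determinism
    }

    fn main() -> Result<(), String> {
        let calls = ["fee", "xfer"];
        // PUPU would interleave: P(fee) U(fee) P(xfer) U(xfer).
        // PPUU runs all checks first (parallelizable, since any failing P
        // aborts the tx before any U has run), then the updates in order:
        for call in calls {
            process(call)?;
        }
        for call in calls {
            update(call);
        }
        Ok(())
    }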
What if I have a single input and I want to send you tokens
I can only use that input to pay the fee and do nothing else
To do your thing we'd have to rewrite all the contracts to learn this new thing
you don't have to change anything in the contracts
Whereas now I can pay the fee, create a new output, and use that output in a Transfer
ah true
meh
ghey
okok i yield
It's totally fine
You can eventually do parallelism in this case as well
Just needs a proper algorithm
To identify independent txs
well the contracts have to mark it explicitly
sorry not the contracts, i mean the calls
I'm sure it can be done
(tx_id, call_idx)
it could be risky
actually in your fee example, i could double spend the coin
You can't because PUPU
yep correct
blockchains in general are slow af
in prod it will be a chief concern
btw there is a flag for DaoProposalMetadata called ended, but when exec() happens, we just delete it from the DB, so i think it's redundant and we can remove it
That flag was for potential time expiration
Not for exec
we can also delete it from the db
That has to be triggered
yep, same for the flag ended
also in vote we can add a check to prevent voting past the time limit
Yeah true, I think I added that to make it a timestamp
But didn't happen
(so triggering doesn't need to happen)
ok i have the time limit as a TODO
gm
gm
DAO test is broken on current master, it was working on 6bb313db3d887dbbf9f6b6eda06c902414c019f5
git bisect cannot detect the bad commit, but says it's one of these: d958bafdf4b08e558dbc3783c49b2ee7226d08c0 9d8fdb0c3ab787a03e11e7d061198f48fb0452d6 bd86ce56783a17d39404e85d462e3d73d6e37b52 11657ca7bbeeb65ad4b51e8958cec2ddecf857d0
so yeah the tree stuff
: @narodnik pushed 1 commit to master: 86a117cd38: dao: remove redundant ended, we delete proposals from the DB. getting them will fail after they're ended.
weird it fails on the airdrop, yet the money::transfer test works fine
nvm i guess i didnt rm the .bins again
https://stackoverflow.com/questions/11129212/tcp-can-two-different-sockets-share-a-port
"there can only be one process per computer listening on a port"
so i think when the ping_node to self happens, it blocks the inbound connections on the same port
You need to avoid adding self to your peerlist
i'll add a timer
but from debug output it seems pinging ourselves takes a long time, like 5s or so, delaying the seed protocol
brawndo: i will check that, but just noting the ping_node to self is unrelated and happens before we send our external addr to the seed node
Where are you reproducing this? using the integration test net/tests.rs ?
yes
ok I'll have a look later
thanks
: @lunar-mining pushed 1 commit to greylists: 4e52a00b98: net: remove whitelist_store_or_update call from OutboundSession...
: @parazyd pushed 1 commit to greylists: 7dc0b1d489: net/hosts: Add missing mod.rs
ah ty
fyi in filter_address we are checking it's not our own addr before storing in the greylist, so it shouldn't be this
oops we have our own addrs in the greylist
brawndo
One thing you're not doing is starting the seed p2p instance
yes i'm using lilith
aha
just helpful to have it in a different terminal
let me add an assert for the own addr thing
ah yep so there's for sure our own addrs sneaking into the greylist
brb
b
darkfi/src/contract/dao/src/entrypoint.rs:175
if call_idx >= calls.len() as u32 {
is this ever possible inside wasm?
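(For context, the guard being asked about, sketched with a hypothetical error type and calldata shape, not the actual dao entrypoint signature:)

    // calls: the flattened call vec the runtime hands to the entrypoint;
    // call_idx: the index of the call currently being executed.
    fn check_call_idx(call_idx: u32, calls: &[Vec<u8>]) -> Result<(), String> {
        // If the host serializes call_idx correctly this branch is dead code,
        // so hitting it would mean a host-side bug, not bad tx data.
        if call_idx >= calls.len() as u32 {
            return Err("call index out of bounds".to_string());
        }
        Ok(())
    }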
surely that would be a runtime error if that happened
Find the entire path of how ix is serialized
But yeah it seems like it would be a host issue
validator/verification.rs:358
ok thanks
ah we only do own addrs filtering when it's not localhost https://github.com/darkrenaissance/darkfi/blob/greylists/src/net/hosts/store.rs#L377
gealach: You can filter by port
ACTION afk now see you
: @narodnik pushed 1 commit to master: 53ee11777d: dao: remove money::transfer specific checks in prep for move to auth module
: @narodnik pushed 1 commit to master: 5cdafb5238: cleanup money_transfer entrypoint
: @lunar-mining pushed 1 commit to greylists: b68ae27935: net: avoid adding our own address to the greylist when on localnet...
: @lunar-mining pushed 1 commit to greylists: cda2aeb564: net: clean up reference/ pass by value usage in store.rs
: @lunar-mining pushed 1 commit to greylists: 5c4e046908: net: fix tests on store.rs
free-module: around?
ofc
ah well you kinda answered the Q already, which was whether protocol_seed gets called once, but indeed it gets called again in the PeerDiscovery loop in outbound session
just trying to close in on this error which is triggered by send_my_addrs, ping_node in protocol_seed
i think it's cos we're blocking the port with the ping_node call and inbound connections are spazzing out
we get channel read/write errors, "broken pipe" etc on the inbound connections
you said before to add a check like "last_ping_time" or something before pinging ourselves, to make sure we're not making multiple redundant calls
so maybe it's a good moment to do that
better to not add loads of stuff until you understand fully what's going on or where the error comes from
fair
if you disable the pinging, does everything go back to normal?
yes
also, if we disable inbound connections, it works fine
also, if we only allow pinging self to happen when it's an outbound connection (as in protocol_addr), channels work fine but host lists don't propagate
what about artificial delays in ping to slow it down?
i'll try that
doesn't stop the error
broken pipe/connection reset
how long is the delay? you can try like 10s for example
ok was doing 5
yeah also fine
10s doesn't work either
++
found something interesting and unexpected
in protocol_seed, if we don't ping ourselves and instead just send our addr without pinging at all, we get the same error
it's only if we remove all send_my_addr functionality that the error stops
so it's not to do with ping_node as i thought
i'm closin in
doesn't happen when we run seed in test.rs instead of using lilith...
gtg afk for a bit
gealach: pls have a look in p2p, there might be some weird bug that is not quite deterministic
gealach: It happens sometimes when outbound_connections=0, also inbound connections stop working, I dunno for what reason
There could also always be silent failures with anything using the ? operator
tnx brawndo will try these hints
unwrap() ftw
biab
gn
gn
gm
gm
gm
greets
ok so here's my understanding of what's happening with the channel send/recv errors:
1. when we do a version exchange with ourselves in ping_node, the temporary channel that is created is interpreted by the TcpListener as an inbound connection, and we create an acceptor and start an inbound session with it. as we destroy the channel after the version exchange, this inbound connection is broken and we get channel send/recv errors
2. also, in perform_handshake_protocols we store the channel in the p2p set of channels, which results in our own external addr being stored as a p2p channel
2 is easily fixable by creating a new handshake protocol specific to ping_node that doesn't store the channel in the p2p store
not sure what to do about 1
the good news is it's a harmless error, as we don't really care about that channel getting destroyed, it's expected behavior
however if we could avoid the listener picking up on those temporary handshake connections that would be better
To me it sounds like correct behaviour
When you ping "self", where "self" is an inbound session acceptor, you should create a channel
The issue might be that you're not closing the channel after the handshake
For "pinging" nodes, IMO you should still be performing the full handshake including protocol and version negotiation
https://github.com/darkrenaissance/darkfi/blob/5c4e04690800d26c201783a8ec7dc7e9f34b8c81/src/net/hosts/refinery.rs#L133
Well for one it's missing in the Err case in that switch
yes, i have it added locally
Is this issue isolated in a test case?
i've just been commenting stuff out from net
wdym re: pinging nodes?
in the test i'm running, it's just a single node connecting to a seed node, and ping_node only happens on self
TODO: make this an actual ping-pong method, rather than a version exchange.
I don't think this is needed
OK
solstice: Is it errors like this? https://termbin.com/zrag
I think this is a non-issue
It's probably just a local thing
Between https://github.com/darkrenaissance/darkfi/blob/5c4e04690800d26c201783a8ec7dc7e9f34b8c81/src/net/channel.rs#L266-L271 and this one being caught: https://github.com/darkrenaissance/darkfi/blob/5c4e04690800d26c201783a8ec7dc7e9f34b8c81/src/net/channel.rs#L284
Just other bg tasks still run
Once line 284 is caught by its parent functions, the tasks should be stopped I guess
What's the point of protocol_address.rs:201 ?
why perform the full version exchange? monero doesn't do it
we can just send ping, recv pong and disconnect - a lightweight minimal exchange for both parties
yep it's that brawndo
re: error about protocol_address 201, we're doing what monero does, which is pinging ourselves to make sure our address is valid before broadcasting it
: @lunar-mining pushed 4 commits to greylists: 8ca4396548: net: call channel.stop() when we get a handshake error on ping_node
: @lunar-mining pushed 4 commits to greylists: b673b8c461: net: remove whitelist_downgrade() from outbound_session (monero doesn't do this)
: @lunar-mining pushed 4 commits to greylists: 6c0497b862: net: create perform_local_handshake which does a version exchange without adding channel to the p2p store, and use in ping_node
: @lunar-mining pushed 4 commits to greylists: e4a684f20e: lilith: comment out broken load_hosts code and add FIXME note
yeah i meant about the TODO
yes ik, i was replying to brawndo, he asked about protocol_address.rs:201
if we are doing the full version exchange, then we should keep the version numbers
so asking nodes for addrs, i can apply a filter >= version (or just keep the full version message)
isn't a version exchange redundant since we will do one anyway on establishing a connection with an external node?
might be useful in the case i just listed
it's not redundant since it's info before you connect to a node
allows you to select nodes based off policy, or apply other criteria
brawndo: i believe the parent function is catching the error, see https://github.com/darkrenaissance/darkfi/blob/e4a684f20ed121deb98255c2ab70416d1ef2fa6c/src/net/channel.rs#L132
afk for a bit
hey i'm setting up a codeberg mirror for darkfi, can someone add me to the repo? i'm Gh_4321_pm
i mean 0f1VdLINGf@onionmail.org
yes, one sec
i need your username
ah it's Darkieoo
i can't see it
strange
: @narodnik pushed 1 commit to master: 2f80d8ad26: dao: modify proposals so they now just specify a generic call. This is done through an intermediate 'auth' contract. We provide one called DaoAuthMoneyTransfer so DAOs can transfer money around.
upgrayedd: can you check darkfi/src/contract/dao/src/entrypoint/exec.rs:101
dao_exec_process_instruction() <- i am checking the children of DAO::exec(). Am I doing this correctly?
also can you advise me on darkfi/src/contract/test-harness/src/dao_exec.rs:184
: @dependabot[bot] pushed 1 commit to zerocopy-0.7.32: 89407d8dd0: build(deps): bump zerocopy from 0.7.30 to 0.7.32...
you construct DAO::exec() and money::transfer() as siblings, but actually I want [(DAO::exec(), [DAO::auth_money_transfer(), money::transfer()])], where those 2 calls are children of DAO::exec()
sec
i looked at the darktree inside the SDK but i'm a bit dense... will look tomorrow morning otherwise
brawndo: how can i convert pallas::Base into u64?
let auth_call_function_code: u64 = auth_call.function_id.into();
due to the fee call, we have to introduce children indexes in the call params
so your dao exec must have the auth call index inside exec
I would do something like:
for auth_call in params.proposal_auth_calls { if !self.children_indexes.contains(auth_call.index) -> error }
then do as you do, with child = &calls[child_idx]
O_O
so the logic there is fine, just add the protection clause to be extra safe, to verify the params
how can i iterate over the children of DAO::exec() call? inside the entrypoint?
isn't what i did correct
yes yeah its correct XD
just describing the context in general
just add the protection clause, so you ensure the index inside the call params is in bounds of the calls indexes
ah i dont care, the wasm will fail if an OOB err occurs
true
then its fine
so you just need to create a tx where the root is the exec call
yep
and its children are the other ones
i made a little diagram
where is that?
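(On the pallas::Base <-> u64 question above, a sketch using the pasta_curves API; going from Base back to u64 only makes sense when the value is known to fit in 64 bits:)

    use pasta_curves::{group::ff::PrimeField, pallas};

    // u64 -> Base is direct, since PrimeField requires From<u64>.
    fn u64_to_base(x: u64) -> pallas::Base {
        pallas::Base::from(x)
    }

    // Base -> u64 has no From impl; go through the canonical little-endian
    // byte representation and take the low 8 bytes (silently truncates if
    // the field element doesn't actually fit in a u64).
    fn base_to_u64(x: pallas::Base) -> u64 {
        let repr = x.to_repr(); // [u8; 32], little-endian
        let mut buf = [0u8; 8];
        buf.copy_from_slice(&repr[..8]);
        u64::from_le_bytes(buf)
    }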
you dont have to do it right now
darkfi/src/contract/test-harness/src/dao_exec.rs:184
yy
so add just another tx_builder.append() with the auth call
just keep in mind to append in order and should be g2g
and when we get fee, it should be: tx_builder = ..., tx_builder.append(fee call leaf), tx_builder.append(auth call leaf), tx_builder.append(xfer leaf)
i dont want to append though
and gg
i want (auth, xfer) as children of exec
tx_builder.append means append a child to the root of the transaction builder tree
go check the code comments lol :D
sorry it's embarrassing, i'm really bad with this stuff
hahaha no worries
I guess the naming is misleading
i got 100% on my algebra exam, but failed discrete math/graph theory
since you assume it appends a new call to the tx, but its actually appending a new call leaf to the current root (aka a new child)
but they will all be siblings (exec, transfer, auth)
no exec is root
i want exec owns (transfer, auth)
call tree will be: Tree { call: exec, children: [transfer, auth] }
oh nice ok that's easy then
just mention fee
since then it will be Tree { call: exec, children: [fee, transfer, auth] }
which will give calls order: [fee, transfer, auth, exec] (since we want fee at first position)
I guess you want auth before transfer no? or it doesn't matter?
yes auth before transfer
why is fee a child of exec?
it should be [fee, exec -> (auth, xfer)]
you have a single root
ACTION wonders if dao transfers can pay their own fee
so fee must be first child of root
we should change it to have multiple roots
sibling calls
my initial design was having fee as the root, but that puts it in last position in the index, while after discussion with brawndo we want it to be first
just make it a sibling and first
: @narodnik pushed 1 commit to master: 8ac99e7df3: dao: append auth xfer call
^ is this correct?
sibling to what?
when i enable this line, darkfi/src/contract/dao/src/entrypoint/exec.rs:113 then it fails
sibling to exec
so (fee, exec) are 2 sibling calls
kinda weird to make fee a child of exec
anyway will go to sleep, gn
maybe root can be empty then
but that breaks the logic
: @Dastan-glitch pushed 1 commit to master: bf65de25ad: bin/tau: remove commented/unused code
: @aggstam pushed 1 commit to master: fee3385688: sdk/dark_tree: create fn to shift root in flatten vec to first position
free-module, brawndo: there ^, problem solved!
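(A minimal sketch of that shift with plain strings instead of the sdk types: flatten in DFS post-order so children precede parents, then move the root call from the last slot to the front; the real implementation also has to fix up all the parent/child indexes:)

    // Tree { call: fee, children: [Tree { call: exec, children: [auth, xfer] }] }
    // flattens in DFS post-order to [auth, xfer, exec, fee].
    fn shift_root_first(mut postorder: Vec<&'static str>) -> Vec<&'static str> {
        if let Some(root) = postorder.pop() {
            postorder.insert(0, root); // root moves from last to first
        }
        postorder
    }

    fn main() {
        let flat = shift_root_first(vec!["auth", "xfer", "exec", "fee"]);
        assert_eq!(flat, vec!["fee", "auth", "xfer", "exec"]);
    }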
now the tree has the fee call as root, where all the rest of the calls are its children, and when we build the flattened vec we move the root from last position to first (updating all indexes correctly)
so lets say we have a dao call
the tree will look like: Tree { root_call: fee, children: [Tree { call: exec, children: [auth, xfer] }] }
and its flattened vec will be: vec = [fee, auth, xfer, exec]
if lets say we want to add another call to that tx (irrelevant to exec) it will become: Tree { root_call: fee, children: [Tree { call: exec, children: [auth, xfer] }, Tree { call: foo, children: [bar] }] }
with vec: [fee, auth, xfer, exec, bar, foo]
so in code it will look like:
tx_builder = TransactionBuilder::new(FeeCall::Default, vec![]);
tx_builder.append(DaoExecCall);
tx_builder.append(FooCall);
let vec = tx_builder.build();
let actual_fee_call = calculate_fee(vec[1..]);
vec[0].data = actual_fee_call;
something along those lines
IMHO having the fee call as root makes the most sense tree wise, as all other calls are its true children calls
since all tx calls are "bounded" by a single fee call, so that fee call should be their parent
that is really weird
the tree captures calling semantics, idk why we're inserting fee into a call graph, it should be a sibling
btw upgrayedd, brawndo: i think fee should NOT be restricted on where it goes in a tx, it should occur anywhere
2 examples why:
1. DAO proposal to pay the fee and send money somewhere, then DAO calls money::fee() to pay the fee .etc
2. i have X token, swap some amount to DRK and pay a fee (within the same tx)
both of these would increase usability quite a bit, and we get them just by allowing fee calls anywhere in the callgraph to occur
upgrayedd: another reason we need multiple roots: atomic txs, i might want to make 2 calls that occur atomically but are not related (for example send money and do a swap)
#1 means then you can execute DAO proposals without needing some DRK in your wallet ;)
in fact i would just put a field in MoneyTransferParams called fee: u64, and add up all those values in a tx... i'm not sure why a separate call is needed
it would be cool if i had a tx but no DRK to pay the fee, and could swap some WHACKD for DRK, pay the fee, and do the other stuff - all atomically
otherwise it's more like: swap DRK for WHACKD ... wait ... now do main tx
free-module: the call tree just defines the order of execution in relationship with another call
it's more than that, it's used inside the wasm logic
a call being the child of another call doesn't mean that they must be related, just that the child must be executed before the parent
it doesn't have to be used inside wasm tho
fee is not a child of DAO::exec
is it a sibling
it is its parent
no it's not
DAO::exec is a separate call
read again what I wrote
fee is a single call in a tx, therefore parent to all
that doesn't make sense either
why would fee call DAO::exec?
brah
tree is the relationship order between calls, that doesn't mean said calls must use those indexes internally
you can have 2 completely separate calls being siblings, and neither of them uses parent data
sure, just saying i don't think it makes sense
when a dev sees the callgraph, it would be like "why is fee making the calls"
1) What? noone is making any calls, the graph defines the call order/relationships
fee being the root doesn't mean that fee calls the other stuff
it means that the fee call depends on everything else inside the tx
which is true
so it happens last?
in normal dfs post-order yeah
just make it a sibling
sibling under what root?
it's easier to understand
its not a sibling
there is no single root call
bruh don't think of it as calls
its a tree
a tree always has a root
yeah you can have multiple roots then
the fact that we are using the tree to represent calls doesn't change its meaning
you don't need multiple roots
root call in dev words is not the same as tree root
obviously
I'm talking tree root, you are talking root call XD
what if now DAO::exec() calls fee?
why does fee have to be the root?
because it bounds everything else in the tx calls vec
also every tx must have a fee call, so it can be the root where everything else hangs off it
*can* be, but it doesn't have to be
single tree is far simpler than having multiple ones
why introduce extra complexity in the tree impl?
tree is for ordering and providing indexing for related calls
i'll explain what i mean:
I make a DAO proposal to do these calls 1) swap WHACKD for DRK 2) pay the tx fee 3) transfer WHACKD to you
then fee *must* be a child of DAO::exec, it cannot be the root
the call order of that case would be: [fee (paid by the dao), swap, transfer] correct?
yes
sorry no
[swap, fee, transfer, dao::exec] (after flatten)
fee must always be first
why?
as brawndo explained, we need to first check the fee call is valid and suffices for the rest of the calls
so we put it in first position for simplicity
but if it is the root, then it is last
look what my latest push does
it introduces a shift mechanism
so we put root at first position, updating all indexing to the shifted values
why not allow fee in any position? then DAOs can pay fees
it still can pay the fee
how? remember we need to swap WHACKD for DRK *then* pay the fee
why? the fee paying call is not related to the swap
yes it is, the DAO has no DRK to pay for the fee
the swap must complete first, then it has DRK to pay fees
also fee is a child of exec(), not the root
wait wait
what parties are involved in the swap?
DAO: swaps WHACKD for DRK with a counterparty (Bob)
then fee should have been already deducted from the swapped drk
example
we want to swap 100 WHACKD for 100 DRK
since we don't have any DRK, that would mean we get a fee call for 1 DRK, and a swap call for 100 WHACKD for 99 DRK
the drk is from bob, so bob "pays" the fee and swaps the rest agreed amount
does bob pay for the other calls in the tx?
you shouldn't have any other calls in that tx
why? if we disable these limitations (which don't do anything), then we gain all this functionality
i don't recall right now whether or not fee is a multiple inputs call
if it is, each other call simply adds its fee input to the fee call
so its still a single fee call, containing all the inputs for the rest of the calls, without caring who paid what
the DAO cannot make a fee call unless it is a child of DAO::exec
so then DAO::exec would be the root again
that call fee has been paid for
we are talking different calls
sure, i just think restricting fee's position and only allowing a single tree root (which is fee) is a bad idea
do the next calls in that atomic tx consume any of the transferred drk?
Vec -> flatten each tree -> join the lists together
^ this should be easy
no they don't use any DRK
the whole point of the swap is just to pay the fee
I'm not saying its not easy, it just introduces extra complexity for no real reason, other than naming semantics
wen mainnet
soon, send me your bitcoins
free-module: it pays the fee, whats the issue then?
the DAO cannot make fee calls unless they are a child of DAO::exec
fee cannot call DAO::exec
fee calls nothing and noone calls fee
fee *must* be a child of DAO::exec
tree relationships have nothing to do with who calls what
why must fee be a child of dao::exec?
any call made by the DAO must be a child of dao::exec
dao::exec does all the checks
fee is not a call of the dao, its a transaction call
every tx has a single fee call
so then we cannot use the DAO treasury to pay for fees, just because of an artificial limitation
dao::exec can check fee validity
calls[self.parent_index]
DAO proposals encode multiple calls
all those calls are children of DAO::exec
yes
end?
fee must be a child of DAO::exec, so the DAO can pay for fees using its treasury
fee call is omnipresent XD
it doesn't have to be a child
yo wtf
again you are using dev fn call semantics
Fees shouldn't use inputs with nonzero spend hooks
pls stop these ideas
why? XD
why not put a field in money::transfer() called fee: u64?
Because the fee call should be completely separate from anything else and not introduce child calls of its own
well upgrayedd is saying *everything* is a child of fee
By allowing spend hooks like that you're now introducing a completely new attack vector
(it is the root)
IN THE TREE
jfc
The fee call should be as minimal as possible, should be the first call executed in any tx, and should not provide a sandbox-escape way for any other thing, e.g. running spend hooks
i think the attack vector is the same as money::transfer()
I think we have more pressing work to do
i'm not even sure why there's a separate fee call, why doesn't MoneyTransferParams have a field called fee: u64
MoneyTransfer does arbitrary tokens
Fee enforces the native token
i don't get this weird logic where the root of the tree (which should be last), suddenly becomes first
wat
i don't understand why we cannot have sibling calls, everything must have a single root call
i don't know why fee must be the first call, and why the DAO cannot make fee calls to pay for fees using its own treasury
these all seem like artificial limitations
(so it will be required to have DRK in your wallet to execute DAO proposals)
These are security and simplicity limitations. I don't believe the DAO should use its own treasury for this since it can introduce various attacks, e.g. draining the DAO using fee calls.
that's for the DAO impl to decide
Yes it will be required to have DRK in the wallet.
1. root shift: thats just since it makes much more sense (at least to me) to use the fee call as the tree root, since its always present and single, so we shift it to first position
if that's possible then it's a bad impl of the DAO logic
2. you can have sibling calls, but thats not a single tree, you are saying to introduce 1 tree per call execution, doable, noone ever said we cannot have it, its just more complex
3. fee must be the first call as it bounds everything else
Why do we have tx fees at all?
Maybe I should delete that code
DAO can only make a single call per proposal, exactly matching the format of the proposal
the proposal would include the fee paying calls
so why not allow the DAO to pay its own fees?
it's not like anyone can arbitrarily call the DAO to make fee calls, it doesn't work that way
brawndo: i would delete the fee call, and just have MoneyTransferParams with a field called fee: u64, then this fee is accepted if the tx token_id has a blind of 0, and matches the DRK token_id
lol
No
why?
Because I do not think that such function obfuscation should be happening in the codebase.
There should be a clear, separate namespace for any functionality a contract provides.
making fee a separate call from money::transfer() doesn't bring any security benefit. if money::transfer() is broken, it's the end game
The transaction fee payment is not a money::transfer at all.
The transaction fee payment should be as minimal as possible and induce the minimum gas usage.
It should not be providing a method to call other contracts. This should be done through proper calls that do have the possibility to run spend hooks.
ok well why not allow money::transfer() to also pay fees? that way the DAO could pay for fees
By keeping the fee call simple and minimal, it forces users to write proper and clean code.
Not allowing money::transfer() to pay fees because money::fee() should be used to pay for fees, as is intended by its function name.
you're essentially saying that no contract can ever pay its own fee, users must pay fees
Yes users should pay tx fees.
why? why not allow contracts to also pay fees?
Because you're introducing the attack vector of allowing arbitrary calls to happen in the fee call.
In my opinion, the inputs for the fee payment should be clean of any spend hooks.
Also I don't know exactly, but my intuition tells me that such a thing is insecure.
i mean keeping fee separate has reasonable arguments, i could see the benefit of a fast simple logic
If such things were safe and secure, I think most other chains would be doing such things, whereas I do not think they do.
but the spend_hook simply says the following call must occur, it says nothing about the fee itself
The fee payment also does computing, it is not a free call
It should be as small as possible and not induce arbitrary computing which slows down any node's computation bandwidth.
It is literally a call that has to be in _every single transaction_
No matter what the transaction does.
But you know, if you want to change up the code, you should
I don't see the point of all this anymore, it just feels like wasting time on discussions
Instead I could be working
It's literally been an hour
well i'm bumping up against this now because i'm working with the tree
the tree has nothing to do with this
your problem is wanting the dao to pay its own fee
tree or not, the same discussion would have happened
the problem is that DAO::exec owns [auth, xfer]
but you see then it must be [fee, auth, xfer]
so then now it was changed so fee is the root of the tree
i was saying it should be a sibling, not a root
and by having it as a root and restricting its position, we cannot have it as a child of exec
fee shouldn't be a child of anything
https://www.youtube.com/watch?v=ZXsQAXx_ao0
it was always going to be [fee, ...]
instead of doing the weird root thing, why not just enable siblings, and put fee as the first sibling?
you mean different trees?
yes and join the flattened lists
yeah sure we can do that
I'm just arguing that it was extra complexity since the fee tree is a single root
>tfw paying a fee needs a valid dao proposal
so why not make it the total root of the tree?
because the root should go last, and if a dev is looking at the tree, he will need to know the extra detail of: "oh yeah the root actually goes first, there is special logic that does that"
(we will show this in the tooling, so it will be unexpected)
It's not a Merkle tree lol
brawndo │ >tfw paying a fee needs a valid dao proposal
what's wrong with this?
seems quite logical
Don't you need to do a DAO::exec() to spend the DAO treasury?
yes
And doesn't DAO::exec() need a proposal?
yes
Therefore you need to make a proposal, vote on it, execute it - in order to pay a tx fee.
- pay from the DAO treasury
you're correct, the user will pay for propose and vote
So you're paying the fee to create a proposal as a user, then you're paying a fee to vote, then you're waiting on a proposal to finish
And then the DAO is (potentially) able to pay a fee for a tx
Sounds awfully complex compared to just having users pay fees
Expensive too
idk, i think it would be better if contracts had the ability to pay fees
for example in a single tx, i could do: [swap, fee, X] rather than swap ... (waiting) ... [fee, X]
we reduce the ability for people to trade and get in to opportunities quickly
Sure, implement it
I'm not going to
the only reason I don't is because I don't want to step on your toes, and ofc i won't go do random things without both of yours consent
but if it's a matter of you wanting me to, then i'll gladly implement fee logic or whatever you ask of me
You have my consent
I disagree with contracts paying fees... users should
ok I won't argue this further then...
also with contracts arbitrarily creating fee calls, there should only be a single one in each tx
the tree stuff yeah I will revisit the logic to make it more clear
but again its just positional ordering, not a who-calls-what tree
the tree is not executed by the node, purely by the wallet
so it's fine to be a little more expensive
yeah I will make it so a tx uses a forest, not a single tree
so your first tree would be the fee call (standalone root, no children)
and then each distinct call would be a single tree where root is the main contract call, and children all the dependency calls
so in the dao exec case it would be like: DarkForest { trees: [DarkTree { call: fee, children: [] }, DarkTree { call: dao::exec, children: [auth, xfer] }] }
gm
correct, thanks a lot
with flattened vec: [fee, auth, xfer, exec]
++
: @narodnik pushed 1 commit to master: 4ae9b2607f: dao: fix broken unit test
upgrayedd: the shifted root stuff is messing with my unit test, i'm trying to build the tree... if you're seeing this, i will revert that commit
: @narodnik pushed 2 commits to master: 79d43e61af: Revert "sdk/dark_tree: create fn to shift root in flatten vec to first position"...
: @narodnik pushed 2 commits to master: 46a89b1b09: dao test: manually build the call tree
^ brawndo: how can I convert pallas::Base to u64? I looked in the docs and tried .into()
let auth_call_function_code: u64 = auth_call.function_id.into();
https://docs.rs/pasta_curves/latest/pasta_curves/struct.Fp.html
actually maybe i will just store the u8, and convert to pallas::Base when I need it, rather than storing pallas::Base
Hey
yeah you should do the latter
makes sense thx
: @narodnik pushed 1 commit to master: 5341582ccd: dao::exec(): check the auth children are set
brawndo: shouldn't this return a duplicate dbhandle if it's already initialized, instead of returning an error
If a db is already opened, it is an error to open it again
lol i didn't add the link
https://github.com/darkrenaissance/darkfi/blob/master/src/runtime/import/db.rs#L154
:)
: @narodnik pushed 1 commit to master: a1117f7a0f: dao::auth_xfer(): check sibling is money::xfer()
biab
brawndo: https://github.com/darkrenaissance/darkfi/blob/master/src/runtime/vm_runtime.rs#L558
the first zero can still be part of another payload, not necessarily, and not possible, no?
Yeah I think there I meant to find the first unallocated part of memory
In WASM memory is linear
Currently this is just used on initialization so no issues happened, but I dunno if we'll be more extravagant later on
i think there is no way to tell, no? the memory view only returns the page size, it doesn't keep a counter, or allocated length. I'm not sure
https://docs.rs/wasmer/latest/wasmer/struct.MemoryView.html
i checked the wasmer doc
i thought you had some idea in mind
i think it's unnecessary
Would need some testing
Okay
Yeah
: @ertosns pushed 4 commits to master: 7240222b21: [runtime/memory] test write_slice
: @ertosns pushed 4 commits to master: a4666d2549: [runtime/import/merkle] check if buffer is fully read
: @ertosns pushed 4 commits to master: 27f123da5a: [runtime/import/merkle] replace assertion by returning an error
: @ertosns pushed 4 commits to master: c83ad39e2b: [runtime/vm_runtime] comment copy_to_memory
free-module: thanks for reverting, was gonna do it anyway XD
whew! ;)
will do the forest stuff, need some time to unpack etc etc
nw, this is good for now, i have all i need
well its not for dao lol, but for fee to be in first pos
yeah we don't need the fee in the unit tests
++
: @narodnik pushed 1 commit to master: ff76f6e834: auth_xfer: grab data from DAO::exec auth spec, do some verification on sibling call.
: @narodnik pushed 1 commit to master: 87459ac644: prettify ContractCall and Transaction debug output
: @lunar-mining pushed 4 commits to greylists: d2dcbddbc7: net: and anchorlist and minimal utilities. also clarify hosts specific TODOs.
: @lunar-mining pushed 4 commits to greylists: 58c8f9124a: net: check whether host is in the peerlist before adding to greylist. also make additional anchorlist utils....
: @lunar-mining pushed 4 commits to greylists: 81177ecdf9: net: add peer to the anchorlist with an updated last_seen when we call p2p.store() on a connected channel
: @lunar-mining pushed 4 commits to greylists: ee32865786: net: replace outbound connection loop with monero grey/white/anchor connection_maker()...
AFK
: @aggstam pushed 2 commits to master: af732f588f: sdk/dark_tree: created DarkForest combining multiple DarkTrees
free-module, ertosns: yo please run make clippy on repo root and fix stuff
free-module: dark forest added, doing exactly what we discussed in the morning, cheers
greets
i'm moving a bunch of stuff around, rewriting .etc, will clean up once done
ty ser
(nearly done)
no worries/rush re clippy, just mentioning so it's not forgotten :D
yessir, won't forget
gonna bounce, cu
cya l8r
: @narodnik pushed 1 commit to master: 3a2c4e1799: auth_xfer: check output coins match proposal
: @narodnik pushed 1 commit to master: b8fa27c988: mv entrypoint.rs entrypoint/mod.rs
: @narodnik pushed 1 commit to master: f4826555f6: auth_xfer: add zk proof (incomplete)
: @lunar-mining pushed 1 commit to greylists: fcf53ebd28: net: cleanup connect loop code reuse by implement connect_slot() method. also prevent infinite loop by doing peer discovery when the hostlist is empty....
: @lunar-mining pushed 1 commit to greylists: be5d1f2dee: net: prevent inbound session channels from being stored in the anchorlist
biab
: @narodnik pushed 1 commit to master: 7e9bd2946e: finish dao rewrite
^ woo
free-module: noice, will check it now
doing the clippy stuff now
don't forget to update the vks/pks hash
yes ty
free-module: all good with the tree stuff?
yep perfect
brawndo: yo what was the trick to completely nuke cargo caches
rm -rf ~/.cargo ?
: @narodnik pushed 1 commit to master: c3587c0c6f: general cleanup, clippy & update VKS/PKS in test-harness
: @narodnik pushed 1 commit to master: 6ba9cf2e27: remove trailing whitespace from ZK files
b
upgrayedd: rm -rf ~/.cargo/registry
nuking the whole ~/.cargo would remove your rustup binaries
btw I'm setting up a github <--> codeberg mirror
ah nice good idea ty
dasman: nice thanks
: @lunar-mining pushed 1 commit to greylists: 5c0707992a: net: improve outbound_session connection loop logic.
: @Dastan-glitch pushed 1 commit to master: e85a49f0d4: add codeberg mirroring CI
https://codeberg.org/Xirscz7M2t/darkfi
figured i would ask in here but unsure if it is the best place to inquire - was thinking that it might be good to work on creating a nice, streamlined visual branding/identity package for darkfi. standardized stuff like use of fonts, colors, text settings, spacing rules, etc. would be defined, would create a bunch of frontend UI components, that sort of thing. any thoughts?
is that your background?
there's one artist doing UI stuff now, can share you some of the mockups
https://agorism.dev/uploads/tor-test.py
if anyone wants to test tor (took me ages to make this lol)
you need the pysocks library
gm hackors
: @lunar-mining pushed 2 commits to greylists: c40ce5d335: net: move host selection logic back into hosts/store to avoid insane nesting in outbound session loop
: @lunar-mining pushed 2 commits to greylists: ee401e1d2d: net: add save_hosts() and load_hosts() methods and invoke on greylist refinery start and stop
: @Dastan-glitch pushed 1 commit to master: bf9eae8177: update target repo url in mirror CI
https://codeberg.org/darkrenaissance/darkfi
ok trying it out
free-module: just saw you build Transaction fully manually in test-harness/dao_exec, noice
fixing the tests
although the dao test fails with an invalid zk proof, so something's not right there
consensus and money tests fully pass
the dao test works for me, retrying now
: @aggstam pushed 1 commit to master: cf1ce28ab8: contract/test-harness: consensus_stake and consensus_unstake use TransactionBuilder properly, updated vks/pks hashes
check this commit
probably using different vks/pks as I had to update the hashes
dao test passes for me
i already updated the hashes
c3587c0c6fa5879f9b27041ceedfff4ac8550a8b
maybe try deleting the ZK .bin files?
current code produces a diff one
perhaps something changed
can you verify?
I deleted my .bin files to check if the hashes are correct and got diff ones
14:31:54 [DEBUG] (2) darkfi_contract_test_harness::vks: Known VKS hash: da690bdbf157a3e30abc173c69b74400bb032daf0ce6c0cab4c567fe9f0b361e
14:31:54 [DEBUG] (2) darkfi_contract_test_harness::vks: Found VKS hash: da690bdbf157a3e30abc173c69b74400bb032daf0ce6c0cab4c567fe9f0b361e
14:31:55 [DEBUG] (2) darkfi_contract_test_harness::vks: Known PKS hash: a1c446da1c4df1ef26ff3abc41fe010a90452d9063dbf87789a20cdb900b487a
14:31:55 [DEBUG] (2) darkfi_contract_test_harness::vks: Found PKS hash: a1c446da1c4df1ef26ff3abc41fe010a90452d9063dbf87789a20cdb900b487a
git pull
rm them
ok and then check again
those are the previous ones
yes
now i get this
14:33:25 [DEBUG] (2) darkfi_contract_test_harness::vks: Known VKS hash: 3f47adca36cd4e17c625d838425793ad7d9ac4ddcc3ed6739add3adb4dfbab8c
dasman: can you check the failing codeberg pipeline?
14:33:25 [DEBUG] (2) darkfi_contract_test_harness::vks: Found VKS hash: da690bdbf157a3e30abc173c69b74400bb032daf0ce6c0cab4c567fe9f0b361e
14:33:26 [DEBUG] (2) darkfi_contract_test_harness::vks: Known PKS hash: 0d3fef220868380aeb9146a9530bd4e9cbe3b7c1d05b20bfe5c8335741367bc8
14:33:26 [DEBUG] (2) darkfi_contract_test_harness::vks: Found PKS hash: a1c446da1c4df1ef26ff3abc41fe010a90452d9063dbf87789a20cdb900b487a
but i'm going to make clean, and delete all bins
++
removing the whitespaces should have made the hashes change, so probably didn't check after that push
it should recompile cos make only checks timestamps iirc
vks and pks hashes only recompile if you remove them
14:37:17 [DEBUG] (2) darkfi_contract_test_harness::vks: vks.bin da690bdbf157a3e30abc173c69b74400bb032daf0ce6c0cab4c567fe9f0b361e
14:37:19 [DEBUG] (2) darkfi_contract_test_harness::vks: pks.bin a1c446da1c4df1ef26ff3abc41fe010a90452d9063dbf87789a20cdb900b487a
same hashes
sec let me check, perhaps I missed something
++
i did make clean, deleted target/ and vks/pks.bin (make clean also removed ZK bins)
: @aggstam pushed 1 commit to master: f44019495b: contract/test-harness: reverted vks/pks hashes
free-module: yeah I forgot to clean something, those hashes are the correct ones
nice ty
it seems codeberg and github are out of sync
https://github.com/darkrenaissance/darkfi
https://codeberg.org/darkrenaissance/darkfi
dasman: ^
checking rn, it seems to be a permission issue
so codeberg -> github is force-push
i'd say just enable force push
if it's an issue, we can disable it, and encourage everyone to use codeberg
: @Dastan-glitch pushed 1 commit to master: 59079c7b86: remove blank newline from README
so I enabled force push, and it'll work but the name is different, cuz it needed an authorization
ah actually naming is right, this ^ is probably a bot thing
ok testing
try both ways
it didn't work
Error: Rpc failed
: @Dastan-glitch pushed 1 commit to master: c26d340c67: add tutorial on setting up codeberg + tor for darkfi repo
we should wait a few minutes I think, their minimum interval is 10 mins idk
Also, this is gone:
: @aggstam pushed 1 commit to master: f44019495b: contract/test-harness: reverted vks/pks hashes
:)
aha ic ty
tbh I'm concerned about aggstam's commit disappearing
i still see it
cf1ce2
talking about this: f44019495b
: @aggstam pushed 1 commit to master: f44019495b: contract/test-harness: reverted vks/pks hashes
oh i dont see that either
: @Dastan-glitch pushed 1 commit to master: 47f5ac8bb3: change clownflare owned icanhazip.com to myip.wtf
!topic remove memo field from money::transfer() note OR restrict it to N*32 + 2 bytes
Added topic: remove memo field from money::transfer() note OR restrict it to N*32 + 2 bytes (by free-module)
!topic func_id global map + deploy metadata artifact
Added topic: func_id global map + deploy metadata artifact (by free-module)
https://zips.z.cash/protocol/nu5.pdf#saplingandorchardinband
page 62, "New note commitment integrity"
zcash derives the serial from the previous nullifier
well not derive, literally just ρⁿᵉʷ = nfᵒˡᵈ
sapling did it differently: they include both the coin and the merkle path in the nullifier
well, the position
!topic serial derivation
Added topic: serial derivation (by free-module)
!topic blockchain consensus forks bug due to missing ACL perms in runtime/util funcs
Added topic: blockchain consensus forks bug due to missing ACL perms in runtime/util funcs (by free-module)
!list
Topics:
1. remove memo field from money::transfer() note OR restrict it to N*32 + 2 bytes (by free-module)
2. func_id global map + deploy metadata artifact (by free-module)
3. serial derivation (by free-module)
4. blockchain consensus forks bug due to missing ACL perms in runtime/util funcs (by free-module)
!deltopic 4
Removed topic 4
upgrayedd: in bitcoin, you know the block headers contain a self-reported timestamp, which is allowed a certain amount of inaccuracy +/-
whereas i don't see that in our code, and instead it seems blockchain time is calculated like this: self.genesis_ts.0 + self.current_slot() * self.slot_time
isn't that going to drift over time? i'm not sure a random walk can be trusted to stay at 0
ok gn
free-module: pos assumes time correctness so no need to "verify" it. in pow we have https://github.com/darkrenaissance/darkfi/blob/master/src/validator/pow.rs#L209-L217
so we are doing pretty much the same thing as btc, accommodating for inaccuracy
ah that's good. So I can use get_blockchain_time(), right? I'm guessing the function wasn't yet changed, but will be updated soon.
: @Dastan-glitch pushed 1 commit to master: 62df0d2c41: money::xfer(): remove wrong TODO and expand surrounding comment to add context
cosmos puts the vote start/end time inside the proposal
so i will put the duration of a proposal inside each proposal rather than a global DAO param (lmk if that sounds fine or not)
++, makes sense.
brawndo: hey, so some quick questions
Hey
i want to implement verifiable encryption for money::transfer() notes
there's kind of an attack for the DAO where you can create an unspendable coin
aha yeah
there is a memo field which is variable length
should we remove it or make it pallas::Base? or 2 pallas::Base .etc
We make use of the memo field arbitrarily, but now there is only one usecase - the atomic swaps
could it be a URI maybe, saying where to find the data?
For example we keep a secret key related to the swap in the memo field
so could it be a single pallas::Base value?
But we don't use that field anywhere else
Yeah it can be a pallas::Base indeed
Dunno if also having 2 is a good idea, maybe for some future usecase
Or
Just leave the memo field outside of zk
i think if it's >2 then it should be a DHT URI
aha yeah good idea!
As arbitrary data that can be included in a note
But not verifiable
btw i need to change the symmetric algo
Which one?
cos i'm not sure the AEAD thing is workable inside ZK (can look), my idea is just adding blinding factors to each value
Ah did you see sdk/src/crypto/note.rs ElGamalEncryptedNote?
yep
Does that not work?
sorry i mean chachapoly
i can look how it works, but that is probably a byte stream cipher
It's probably not usable in ZK
whereas we now have an array of [pallas::Base] values, so we can use the diffie hellman seed to derive a series of blinding factors
then apply those to the value array
v₁ + b₁, …, vₙ + bₙ
The thing also is you won't be able to commit to scalar field values (e.g. value blind)
where bᵢ are the blinding factors derived from the seed
Unless we convert them mod p instead of mod q
ah ffs
ok i will look more at that
But I do not know if that is safe
even then it is safe if within range
There's probably reasons that the scalar field is used instead of the base field for these kinds of things
the scalar field is used because of EC arithmetic inside the circuit (can go into detail another time)
I mean for pedersen commitments for example
ah
you think it doesn't matter if we do the mod_r_p trick?
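(The scheme being sketched above, written out, assuming a hash H into the base field 𝔽p, e.g. Poseidon, and an ephemeral Diffie-Hellman shared secret as the seed:)

  seed = DH(esk, recipient_pk)
  bᵢ = H(seed, i)   for i = 1…n
  cᵢ = vᵢ + bᵢ   in 𝔽p

The receiver re-derives seed with their secret key and recovers vᵢ = cᵢ − bᵢ. Scalar field values (mod q) would first have to be mapped into 𝔽p, which is the mod_r_p concern raised just above.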
aha ic
I'm not sure, would need to think about it, since we decrease the range of the field
This is something we can ask an expert
but should be ok i reckon
i'll look at this more carefully
ok then next thing is about the serial derivation
https://zips.z.cash/protocol/nu5.pdf#saplingandorchardinband
page 62, "New note commitment integrity"
zcash grabs the serial from the previous nullifier, literally just ρⁿᵉʷ = nfᵒˡᵈ
sapling did it differently: they include both the coin and the merkle position in the nullifier (coin being spent)
so there is still a random value for the serial, but we change the nullifier
the nullifier includes some extra info like the coin being spent, so each nullifier is unique per coin
since you can create duplicate coins, they also include the position of the coin within the tree (which we have in the burn proof)
Uint32 leaf_pos, this one
aha
probably pos is sufficient by itself, but adding the coin is also extra safe... it's a small change to burn_v1.zk
I think it is a good idea to do
ok i can fix this
It also saves us from spammy unspendable coins
lastly is about the funcid in the db
!list
Topics:
1. remove memo field from money::transfer() note OR restrict it to N*32 + 2 bytes (by free-module)
2. func_id global map + deploy metadata artifact (by free-module)
3. serial derivation (by free-module)
Thanks a lot
so without this change, we have to modify the coins to include a new param which is function_code
i guess cos zkas doesn't have types, i'm using (contract_id, function_code) everywhere
or we could build some metadata with the deploy that includes the number of functions inside a wasm
then it builds the unique addrs for them
should i add this? or what's the best approach?
Can we defer this one for later? I need some time to think about it.
Perhaps it is possible to generate special entrypoints at deploy-time and then the wasm runtime could be very explicit in what it is calling.
Then you wouldn't be using the `match func_id` statement to branch into functions
But rather the runtime would know what to run
ok sure, but we need to remember that the spend_hook in coins right now is vulnerable because the function_code is not being checked
Do you understand my idea here?
yeah that would be cool, but i think the exporting zk bins is kinda weird right now
it bloats the size of the wasm, maybe nbd
but maybe things should be bundled outside the wasm too, like metadata (author, version, idk)
anyway all looks good tyty
The size of the wasm does not matter at all
does it not affect the loading or running time at all?
Bundling zk bins inside wasm I think is good because then we make that the only way to put circuits on chain
And we can use special functions (like zkas_db_set) to verify them and create verifying keys
There'd have to be a lot more machinery if we somehow did this outside
ok yeah, also we need to do more benchmarking of wasm stuff in general... i've been meaning to ask upgrayedd how to run wasm funcs independently for benchmarks, cos right now a lot of "wisdom" is just voodoo :D
11:34 Can we defer this one for later? I need some time to think about it. Perhaps it is possible to generate special entrypoints at deploy-time and then the wasm runtime could be very explicit in what it is calling.
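(For context, the `match func_id` dispatch style being discussed, as a hedged sketch with hypothetical function codes; the contracts branch on the leading function-code byte of the call payload:)

    // Deploy-time generated entrypoints would let the runtime call the right
    // function directly instead of branching on the first payload byte here.
    fn process_instruction(payload: &[u8]) -> Result<(), String> {
        match payload.first() {
            Some(0x00) => exec_transfer(&payload[1..]),
            Some(0x01) => exec_fee(&payload[1..]),
            _ => Err("unknown function code".to_string()),
        }
    }

    fn exec_transfer(_params: &[u8]) -> Result<(), String> { Ok(()) }
    fn exec_fee(_params: &[u8]) -> Result<(), String> { Ok(()) }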
lol ok tyty
I think we should look into this
To generate some kind of table at deploy time
hmm i'm not sure it's possible since the entrypoints are encoded in a section of the wasm
the exports
Yeah we could have a big macro
https://agorism.dev/uploads/foo.wat (see the bottom)
Like now, we do the 4 sections
But we can expand that perhaps
Needs some thought, but I wouldn't rush the func_id stuff just yet
well i imagine we will build some utility or widget on top for users
right now everything is quite low level
i'm using (contract_id, function_code) everywhere ^
But doing this seems relatively fine too, but it is a bit loose
yeah it's prone to error
Loose in the sense of re-deploying a contract, it can change the function
well that's on me for trusting the contract owner
Yeah true
zkas could do with structs, functions and imports
anyway dont want to complicate things
In the smart-contract repo on github, it uses Nix to have reproducible builds
So wherever you build, the wasm will have the same hash
You could use that to verify on-chain contract contents
aha great
like a lil bundle
it would be great if wasm types worked seamlessly with zkas types
Like what?
if we have such metadata in the wasm, it could maybe enable finer details that interoperate these subsystems
ah perhaps, we can think about it
for example the DAO proposals have a bunch of fields, if i change them, then i have to update: src/model.rs, all the src/client/ builders + the witness tables, the dao-propose/vote/exec/auth_xfer.zk proof files, and src/entrypoint/ get_metadata() (if anything is public)
it's a lot of work, and often a random zk proof will fail cos you forgot something random and there's zero info why or where it failed
then i call export_witness_json() and try it with zkrunner to debug
Yeah that's the low-level tooling :D
so when i say the (contract_id, function_code) tuple is error prone, it's actually related to zkas not having functions/structs/imports (and also wasm/rust types not interoping with zkas)
*nod*
then the rust type could generate its zkas struct, and it could just be imported into zkas
anyway this is all good now tyty
happy darkmas
okay
Thanks, you too :)
grav-mass
ACTION is with a fast pc, what is better than compiling
https://stallman.org/
*throws offering*
https://stallman.org/grav-mass.png
cya later o/
in #philosophy, we were discussing that christmas is actually a pagan festival
nice, cya l8r g8r
!list
Topics:
1. remove memo field from money::transfer() note OR restrict it to N*32 + 2 bytes (by free-module)
2. func_id global map + deploy metadata artifact (by free-module)
3. serial derivation (by free-module)
!deltopic 1
Removed topic 1
!deltopic 2
Removed topic 2
!deltopic 3
No topics
: @narodnik pushed 1 commit to master: bb2769a207: dao: proposals now have a duration, votes are not allowed past the expiry time
free-module: imo dao should use epochs rather than days like we do in staking
greets! dasman is worried one of your commits disappeared, can you check? cos codeberg is mirroring on github now
why are epochs better than using the blockchain time?
it was the vks revert, but it doesn't matter as they got updated anyway
epochs are deterministic with no chance of drift as they derive from the block count
so for example lets say proposal duration is 10 epochs, with 10 blocks per epoch
after 100 blocks voting stops
yeah but the UI for the user is in time
and everyone is able to correctly do that without needing time
so epoch * block_time
but epoch times can drift
well the client can obviously derive it via days
the blockchain time doesn't have drift, but it is inaccurate
inaccuracy is better than drift imo
actually i'm wrong, epoch * block_time doesn't drift since each block is independent
yeah its not a drift, its more like variable duration of epochs
I know that time is the best ux, but epochs are far more reliable in terms of accuracy for computation
how long is an epoch btw?
now we have it arbitrarily at 10 blocks iirc
aha, and a block is 1.5 mins?
yy the target is ~90secs but obviously depends on mining difficulty
in pos its exactly that, hence why you depend on correct clocks
ty will update that now
so is using TimeKeeper::blockchain_timestamp() bad because it will change?
self.genesis_ts.0 + self.current_slot() * self.slot_time
ok afk bbl
that commit disappeared because of a force push from codeberg to github, both were synced except for that commit, and when a commit was pushed to codeberg it forced github to be synced with codeberg
shouldn't happen again I hope -.-
: @Dastan-glitch pushed 1 commit to master: 3cc972c780: bin/darkirc: add tiny test bot
free-module: no thats fine, since its deterministic and the same for everyone
same in terms that time drifts don't matter, everyone will see that timestamp when on that height/slot
back
i think the meet is tmrw
hi
it's cancelled today?
!list
Topics: 1. func_id global map + deploy metadata artifact (by free-module)
!deltopic 1
Removed topic 1
yeah a bunch of people DM'ed me
upgrayedd: i think time functions should not be allowed during the update() phase of the wasm, since they are used in verification logic (it's like a read op)
hello?
hi
https://oberon-lang.github.io/2023/12/25/towards-concurrency.html
hey gm everyone
was the meeting moved to today?
do we need one tbh?
i'm gud
dnt mind either way
nxt week is fine
same
++
gm
11:52 do we need one tbh?
"Agile devs hate him"
failed devs become gurus
: @parazyd pushed 2 commits to master: 001bdecb53: runtime: Remove unused acl_deny function
: @parazyd pushed 2 commits to master: 07cb4e0ad7: chore: cargo fmt
: @parazyd pushed 2 commits to master: 9270910db0: chore: Minor license header fix
: @parazyd pushed 2 commits to master: c4afe20f92: validator: Remove false comments
nice, it's syncing with codeberg
did you guys see https://darkrenaissance.github.io/darkfi/dev/contrib/tor.html ?
v v cool
Yeah it's nice
I didn't know you can redeclare socket.socket like that and have requests pick it up
Although instead of Python, maybe it's easier to just `torsocks curl https://myip.wtf/text` or `curl --socks5-hostname 127.0.0.1:9050 https://myip.wtf/text`
ah the curl command is nicer, let me update that
It's usually annoying when you have to copypaste and then execute a script
yeah ofc
I have a design question regarding tx fees
What if we limit it to 1 input and 1 output and then make it a free call?
It could be abused for free transfers though
i was thinking of something similar re: fees
What was it?
i mean just a single input/output proof ah : @Dastan-glitch pushed 1 commit to master: 4984269c0d: book/tor contrib: replace python tor script with curl socks5 command i think in practice fees are quite low, and dust never gets that small... although there is the edge case where it does so wallets would have to be smart enough to figure that out in terms of UX, it could result in delayed txs while the wallet is consolidating coins (unless fees were allowed in any position ;) ) Yeah let's see I'm building the initial client API just to get it working : @Dastan-glitch pushed 1 commit to master: 8fbd26e0cc: contrib: add todo for zk creds tutorial : @Dastan-glitch pushed 1 commit to master: bcce8f516b: book/contrib/tor: remove python-pysocks ref upgrayedd: Could you implement something that lets us convert from Transaction to TransactionBuilder please? yeah, but whats the use case? I want to be able to use TransactionBuilder::append() on an existing Transaction : @Dastan-glitch pushed 1 commit to master: 73d92bdd9d: contrib/tor: add section about removing github origin brawndo: you can also do it manually I'd prefer properly darkfi/src/contract/test-harness/src/dao_exec.rs:206 ok brawndo: can you give a scenario for this? because how I see it you use TransactionBuilder all the way and "finalize" to a Transaction let gas_used = verify_transaction(&tx); let fee_call = make_fee_call(gas_used); tx.append(fee_call); yep normally the flatten is the last step when it is sent to the node maybe verify there should take the flattened tx temporarily, but you keep the original tree and append to that yeah you can produce flattened tx whenever while keeping the builder intact ok can do that I guess Thanks free-module, upgrayedd: So what do you think re: making the fee 1 input and 1 output explicit? I suppose if we don't enforce fee as calls[0] wouldn't that mean that the user must have an exact token with value=fee? Then you can utilize Money::Transfer to create that coin upgrayedd: No, that's what the output is for where is the change? (as in money change) The deeper I'm getting into the fee code, the more it seems that 1 input 1 output is a good choice upgrayedd: In the output wait I think I'm missing something yeah it would be a quite simple and fast proof, the only issue is coin consolidation which is a corner case but makes wallet impl more complicated lets say I got 1DRK and fee is 0.5DRK for example lets say i want to do a tx but don't have a single coin available of sufficient value free-module: We were kinda talking that it's better to move complexity to client than core 1 input is the 1DRK and output should be 2? the wallet needs to be smart enough to pause, consolidate tx, then send the original free-module: yeah don't build self paying txs and some txs might expire so it would need to be remade again etc. upgrayedd: Input would be 1DRK and output would be 0.5DRK (the change coming back to you) upgrayedd: The fee is public and doesn't create a coin ahh its minted by the miner? wdym don't build self paying txs?
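A hedged sketch of the builder flow settled on above: keep the call tree in the builder, flatten temporarily to measure gas, then append the fee call to the still-intact tree. All types and signatures here are simplified stand-ins (reusing the chat's verify_transaction/make_fee_call names), not the real darkfi API.

struct Transaction;
struct ContractCall;
struct TransactionBuilder { calls: Vec<ContractCall> }

impl TransactionBuilder {
    fn append(&mut self, call: ContractCall) { self.calls.push(call); }
    fn build(&self) -> Transaction { Transaction } // flatten the call tree
}

// Dry-run verification returning the gas consumed (stubbed here).
fn verify_transaction(_tx: &Transaction) -> u64 { 0 }
fn make_fee_call(_gas_used: u64) -> ContractCall { ContractCall }

fn attach_fee(builder: &mut TransactionBuilder) -> Transaction {
    // Flatten temporarily just to measure gas...
    let gas_used = verify_transaction(&builder.build());
    // ...then append the fee call to the original tree and flatten for real.
    builder.append(make_fee_call(gas_used));
    builder.build()
}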
upgrayedd: Yeah can be used in any way ok then yeah 1in 1out makes total sense if you can allow fee in any position, then you can bundle a coin consolidation with a fee tx 15:16 for example lets say i want to do a tx but don't have a single coin available of sufficient value free-module: you said you don't have a single coin Yeah that's what I'm saying 15:13 I suppose if we don't enforce fee as calls[0] thats a self paying tx 15:13 Then you can utilize Money::Transfer to create that coin aha yeah OR you could just allow money::transfer[0], fee[1] or fee[0] yeah that would make more sense so consolidation + fee becomes an option instead of having to look through the vec for the fee unneeded complexity in the dao we use sum(inputs) = sum(outputs) in money transfer implicitly quite a lot so having fees separate makes the logic there easier well yeah but its a kind of edge case compared to normal txs brawndo: do you agree with having a fee or a consolidation fee call at tx.calls[0,1]? i kinda like fee in any position though because why restrict it? it's not special, but idk maybe i'm wrong - i haven't much experience in this area because it gives a ddos case: a huge vector with the fee in the last position means the node has to go through every element to find the fee ok unneeded check while doing: i'm imagining a contract that pays out salaries periodically. any employee can just trigger the contract and it pays for itself It is kind of an artificial limitation, true if tx.calls[0] != FeeCall && !(tx.calls[0] == Transfer && tx.calls[1] == FeeCall) is much faster ok np (although the contract could have a mechanism to account for the fee somehow in its logic, but it needs to do the fee calcs inside wasm) nope XD :D btw the .data.data thing is kinda annoying. is there some way to improve that? i was thinking maybe * Deref trait or something free-module: where? and we call the leafs 'nodes' and the calls are calls whenever we go through the tree welcome to everyone using whatever terminology they like XD let parent_idx = calls[call_idx as usize].parent_index.unwrap(); let exec_callnode = &calls[parent_idx]; let exec_params: DaoExecParams = deserialize(&exec_callnode.data.data[1..])?; callnode.data.data well thats two entirely different structs tho that happen to use the same name imho better to be explicit... yeah just saying maybe there's a better name or the first .data could be Deref'able ++ no traits ok well maybe we could just make a nice function to wrap this The former can be a function yep btw I still don't get the fee position problem I mean other than dev ex, how can it be used? since each tx has a single call fee which one? needing transfer? when you said that fee could go to any position in the vec I mean whats the real use case it solves, other than devex ah i mean having a contract with a treasury paying its own fees well still in that case it should produce a single fee the user doesn't need DRK to call the contract for the whole tx so it should be able to chug it in the first position yeah sure in this specific case, it would work actually then problem solved I guess? :D fine, i'll try to come up with more exotic examples to break this ;) sure, you know we need that for battle testing anyway so the more obscure the better here's one: satoshidice actually no nvm : @parazyd pushed 3 commits to master: ade30c9071: contract/money/fee: Force 1 input and 1 output, and use own ZK circuit.
: @parazyd pushed 3 commits to master: 39879c3b1a: contract/money: Clippy lint : @parazyd pushed 3 commits to master: 5f1f754524: contract/money/fee: WIP initial fee client API implementation Hello. Does nym run or compile on riscv yet? krackattak: hey, this is not a nym channel, lol (last time I checked no) brawndo: for fees you don't need value commits oh wait you do, well it could be one thing (rather than 2) also we can just check the token_id is correct directly in ZK, we don't need to export it and we can check spend_hook/user_data are both 0 wait actually nvm on the last thing dasman: it's been 30 mins and there's a commit on codeberg that didn't get mirrored to github yet gm afk cya : @Dastan-glitch pushed 1 commit to master: 68f938cdb2: dao: replace get_blockchain_time() with get_current_slot() free-module: error rpc failed, idk what's that, I triggered it manually Use get_verifying_slot instead of get_current_slot Also using slots at all is wrong since we're PoW So I advise fixing 68f938cdb2 properly biab b : @lunar-mining pushed 3 commits to greylists: f1e2546ea7: lilith: remove load and save host functionality (made redundant by greylist upgrade) : @lunar-mining pushed 3 commits to greylists: 9eaf2f14cd: net: read hostlist path from Settings. Define a default setting and allow overriding in config : @lunar-mining pushed 3 commits to greylists: 9a674995b6: lilith: add hostlist path to NetInfo and default config b hey, when do u expect to launch mainnet? do you plan to switch from pow to pos at a later point or stay with pow? lain: do you mind adding mirroring CI to greylists branch and push those three commits again since the CI is only on master branch, only a commit to master would trigger it also a resync happens every 10 minutes, and force pushes github to stay in sync with codeberg maybe that's a wrong approach but adding the CI to your branches would make things right ah hmm, well i'll be pushing everything to a new branch and making a pull request for it soon how do i do the CI thing? just copy-paste this should work: https://github.com/darkrenaissance/darkfi/blob/master/.github/workflows/codeberg-mirror.yaml gm gm upgrayedd: 68f938cdb235713087fd1cefbb1d2445390f65f1 freem: check what brawndo said gm ok checking so should i put it back to get_blockchain_time() or use get_verifying_slot()? verifying_slot ok Hey but isn't get_verifying_slot also wrong in PoW context? btw about putting (coin, path) inside the nullifier, we need to convert Value<u32> to Value<pallas::Base> but i was looking at the merkle chip code, and it just does this: let pos: Value<[bool; PATH_LENGTH]> = self.leaf_pos.map(|pos| i2lebsp(pos as u64)); it doesn't even enforce that the array of bools == self.leaf_pos in ZK maybe we have to copy the merkle chip and modify it (if we had the array of bools, we could witness them into a single pallas::Base value, but they aren't exposed) another option: a globally incrementing value for every new coin created maybe this is the better option, we'd need to add a new value in the coin commit, and keep this ticking counter in a money DB Why would you put the Merkle path in the nullifier?
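Back on the .data.data complaint from earlier, one shape the suggested wrapper function could take. The types are simplified stand-ins for the darkfi call-tree structs and the name parent_params is made up; the real code may differ.

// data[0] holds the function code; the encoded params follow it.
struct CallData { data: Vec<u8> }
struct DarkLeaf { data: CallData, parent_index: Option<usize> }

// Hide the double .data indirection and the [1..] function-code skip.
fn parent_params(calls: &[DarkLeaf], call_idx: usize) -> Option<&[u8]> {
    let parent_idx = calls[call_idx].parent_index?;
    Some(&calls[parent_idx].data.data[1..])
}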
You have the coin's leaf position, it's always a unique integer i'm saying the position, not the path it is Value<u32>, which is 32 bits that get converted internally to 32 bools, but we don't have access to this array in the merkle chip git clone --depth=1 https://github.com/zcash/halo2 && vis halo2_gadgets/src/sinsemilla/merkle.rs go to line 134 open halo2_proofs/src/circuit/value.rs, line 97 the map is not enforcing the constraint inside ZK, so there's no guarantee the Value<u32> corresponds to the Value<[bool; 32]> so whatever we do to Value<u32>, it isn't guaranteed to correspond to the leaf_pos used inside the merkle chip unfortunately I'll look later Busy atm np, just putting out the idea of a global counter inside the coin lmk later or tmrw and i'll add that well it would be hash(coin, counter) actually in the mint phase done inside wasm (not ZK), but then the burn proof will unpack the counter and use it in the nullifier I'm not sure if that is a good idea Let's see rather if we can convert the u32 to Fp ok They're constraining something on line 149 The map doesn't constrain, yeah I can just ask them how to do it tbh brawndo: slot = block height so its just a different naming scheme perhaps we should update the sdk stuff to be more specific *nod* can i change value_blind in money from Scalar to Base? Then we can use ec_mul_base instead it's a bit slower but the notes are verifiable I don't want to change that before consulting an expert fyi zcash doesn't have verifiable enc for notes, so you can send someone a coin they cannot decrypt Yeah I know and that's fine i don't see why it's less of an attack than the serial If you cannot decrypt a note, it's not a coin for you same with the serial, you discard the dust coin The idea is that notes can be shared OOB (always spend the coin with the highest value) The serial can be solved in an easy way and should be fixed yeah but then it's problematic because with a DAO proposal i can send you an unspendable coin Using a smaller field is not something we should loosely play around with, we don't know the implications we can also fix this in an easy way, it's fine making the blind a Base value, just slower That's why we should ask an expert What's wrong with asking a cryptographer first? you can ask but i'm certain it works since the generator forms a group, and each value nG is unique so the range of possible points is determined by n, which is used as the scalar field for vesta curves which are also secure Wasn't there also a different way to have the DAO use verifiable encryption? If we enforce verifiable encryption everywhere, it's really gonna slow down the chain bandwidth (i.e.
every tx would have to do it, because fees) ok i'll think a bit and come up with other solutions That's why I'm thinking we don't really need it for basic payments It's rather a bit more specialized for certain protocols ah yes we don't need it in money actually if i have a way to split a Scalar s into 2 values (a, b) which are pallas::Base such that a + b = s, it would work Yeah you could do that with proper type conversion (i could do these checks in DAO::auth_money_transfer()) ok nice mod_r_p(a) + mod_r_p(b) = s ok i know what to do ty Nice btw i'm not sure we need the value_blind and token_blinds in MoneyNote actually we just need the CoinAttributes there so we can unpack the coin : @Dastan-glitch pushed 2 commits to master: 43bdf5eab9: money::transfer(): MoneyNote add TODO about removing value_blind and token_blind : @Dastan-glitch pushed 2 commits to master: 7e6a60bfc0: dao: s/get_current_slot/get_verifying_slot/ Yeah dunno, perhaps it's useful to have there in order to be able to reproduce the note/tx where it was made? Is there a case where that is needed even? : @parazyd pushed 1 commit to master: 8f6d404ce0: contract/money/fee: Reorder some state transition code more logically no tbh a lot of this note stuff seems like bloat lol should be done OOB as you said *nod* altho OOB could compromise anonymity I said it _can_ be OOB, not that it has to Meaning it wouldn't have to be a mandatory part of a payment struct ah great that would be useful cos the DAO doesn't need notes maybe we need a "DataStorage" call, and the note for money::transfer() could be there (i will make verifiable encryption note for the DAO and it will use a different algo to the money one) ACK @ DAO note : @lunar-mining pushed 8 commits to greylists: f1e2546ea7: lilith: remove load and save host functionality (made redundant by greylist upgrade) : @lunar-mining pushed 8 commits to greylists: 9eaf2f14cd: net: read hostlist path from Settings. Define a default setting and allow overriding in config : @lunar-mining pushed 8 commits to greylists: 21fc1ad456: net: create greylist_refinery_interval in net::Settings and update TODOs : @lunar-mining pushed 8 commits to greylists: 5abe54b846: net: remove unwrap()'s and cleanup : @lunar-mining pushed 8 commits to greylists: e14395ebfa: net: add anchor_connection_count and white_connect_percent to Settings and cleanup : @lunar-mining pushed 8 commits to greylists: 498ad73438: net: remove connection from anchorlist when it disconnects and cleanup.... : @lunar-mining pushed 8 commits to greylists: d2bd631243: net: add hostlist documentation dasman: as this is just a temporary branch i have avoided adding the codeberg tracking. will shortly be adding this all into a new branch and can intergrate codeberg then. : @lunar-mining pushed 1 commit to master: 42edf060d2: doc: update TODOs on arch/p2p-network.md so monero has a thing called IDLE HANDSHAKE where it pings idle connections and refines the hostlists if they're active or inactive freem said this seems redundant as why would you try to ping an existing connection maybe i've misunderstood something, will verify against the monero codebase but other than that the greylist update is done (TM) and working fine locally bbl meh so i can only use NULLIFIER_K as EcFixedPointBase, but I cannot use it as an EcFixedPoint for normal ec_mul? 
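For the record, a quick argument that the split above always exists, assuming only that the Pallas base modulus p and scalar modulus q satisfy p < q < 2p (true for the Pallas/Vesta pair): view s as an integer in [0, q). If s < p, take (a, b) = (s, 0). Otherwise take a = p - 1 and b = s - (p - 1); since s <= q - 1 < 2p - 1, we get b <= q - p < p, so both a and b are valid base-field elements and mod_r_p(a) + mod_r_p(b) = s holds in the scalar field.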
aha i can do ec_mul_base(ONE, NULLIFIER_K) to convert it to a normal ec point lolol such an ugly hack but whatevs wtf there is no ec_mul(scalar, point) where point is in the circuit, there's only ec_mul_var_base(base, point) !list hello all, first time here! greets deki, welcome [contact."narodnik"] contact_pubkey = "Didn8p4snHpq99dNjLixEM3QJC3vsddpcjjyaKDuq53d" : @Dastan-glitch pushed 1 commit to master: 7d80e22ba8: dao: auth_xfer add verifiable encryption for all coins produced by money::transfer() feel free to add my key ^ and send me yours ty, how do I get my key? is it the command contact."username" ? deki: welcome! read this: https://darkrenaissance.github.io/darkfi/misc/ircd/private_message.html on it, ty okay got it: [contact."deki"] contact_pubkey = "5oyX9YVuLbi1SGyiAt9yGzR7nYqaRHG241qcjE9z5rzm" Hey my network went down, will have someone reset it tomorrow likely terry: I was thinking a bit regarding your question of hashing the leaf position terry: What if we introduce a zkas opcode that maps types to other types using Value::map() ? That way we could probably do it within the circuit, no? For example if leaf_pos is on the heap, and then we do leaf_pos_fp = map(Fp, leaf_pos), it should work? Although I'm not fully sure, it might need a special gate for this I'm off to bed, thanks for having me all gn brawndo: i don't think that map guarantees they are the same value in the circuit also i think the way the zk vm does lazy witnessing of scalar values might be vulnerable The Scalar witnessing is fine, confirmed by halo2 devs I asked about the value mapping, will keep you posted aha thanks a lot np gn cya hey is anyone up? wanted to ask: is it necessary to do the 'compiling and running a node' part? hi deki wdym, necessary for what I mean, as part of this whole process in running ircd and contributing to the development side? well, we coordinate all dev work over ircd and running ircd means compiling and running an ircd node you can also deploy it on a server/ raspberry pi and just run a client like weechat locally some of us have it running on android too ah right, thanks for the info. So you have it on an Android phone? yes using termux nice, I don't have an Android phone, might look into getting one you can compile it on e.g. linux and then copy the binary to your phone and run it using termux so I basically need to come to the dev meeting and put my hand up there to contribute? someone wrote this guide for an alternative method https://mirror.xyz/0xbe62F12D86B058566E2438fA9f1c4f800f30F68E/kMAfnA4Smkb0xg8904j8rhkkobaZ6UtK2kiGgCtUxK8 https://darkrenaissance.github.io/darkfi/dev/contrib/tor.html opps https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html thanks, it's 1am for me but I can make it ah damn about ircd, we also have the telegram mirror, it's one way tho !list yeah I'm in the telegram group chat ah wait, is this something else though, the telegram mirror? I've seen that phrase elsewhere there's a bridge at t.me/darkfi_darkirc so messages in #dev and other channels are mirrored there great, ty np last question: do I need to initialize a wallet for dev related work? not rn, at some point would be helpful for testing we are working on deploying a new testnet so rebuilding rn okay thanks, will go through the codebase these coming days then nice :D :) this is the android guide: https://darkrenaissance.github.io/darkfi/misc/ircd/ircd.html#installation-android the public logs are here: https://agorism.dev/log/ are there plans to have a similar application for iPhone? 
Or is it too restrictive? I might get a second hand android just to play around with this, would love to get something like the Pine Phone but it doesn't seem to be a proper replacement for an every day phone iphone is restrictive, but should work on there check this out https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html#areas-of-work thanks, was going through it earlier actually. Do you put up your hand for what you want to work on at the meeting? *hand up you can if you want or just start working on tasks doacracy : @Dastan-glitch pushed 1 commit to master: fa3b0ce573: contrib page: mark completed DAO tasks !list dasman: where is the meetbot, and why aren't links being displayed? also the twitter bot can go back up now okay nice, I have one more week off then I go back to work so will try to contribute where i can nice, lmk anything you need ty you saw my DM? on here? No I didn't, was gonna ask about that because I sent you one but not sure if it came through i added your key did you add mine to your config? yeah I did, let me double check. This is mine again just in case: 5oyX9YVuLbi1SGyiAt9yGzR7nYqaRHG241qcjE9z5rzm do you have your private key set? [private_key."XXXX"] yep, I followed this: https://darkrenaissance.github.io/darkfi/misc/ircd/private_message.html double checking now ok i'm restarting just in case, brb b sent you a DM nothing, should it come up on the side with your name? I'm checking my ircd_config.toml yes did you restart? (the daemon) nope, will try that now before that, do I need to remove the single # infront of private_key and contact_pubkey ? why is there a # in front of them? that's a comment you generate your keypair, add it to the config toml, then you also add mine gah I thought so...that's how my .toml file has it, I'll fix it now brb ok back : @Dastan-glitch pushed 1 commit to master: d6d45a4c0d: book: edit contrib guide freem: brawndo is managing the meetbot : @Dastan-glitch pushed 1 commit to master: b0899f8f4b: book/contrib: expand tutorial idea for links I migrated all bots to darkirc (to test it more) ok Twitter changed their api, and I had some changes on a fork, will recheck with mainstream and see what happened gm gm, or gn from australia g'day m'ate g'day indeed : @lunar-mining pushed 72 commits to net_hostlist: 7d80e22ba8: dao: auth_xfer add verifiable encryption for all coins produced by money::transfer() : @lunar-mining pushed 72 commits to net_hostlist: fa3b0ce573: contrib page: mark completed DAO tasks : @lunar-mining pushed 72 commits to net_hostlist: d6d45a4c0d: book: edit contrib guide : @lunar-mining pushed 72 commits to net_hostlist: b0899f8f4b: book/contrib: expand tutorial idea : @lunar-mining pushed 72 commits to net_hostlist: 090e8fddfd: hosts: add probe_node() method : @lunar-mining pushed 72 commits to net_hostlist: 80d6eae22e: hosts: create methods to store hosts in greylist after version exchange and periodically probe_nodes, whitelisting them if responsive : @lunar-mining pushed 72 commits to net_hostlist: 82da1ef2bb: hosts: if lists reach max size, remove the oldest entry from the list. : @lunar-mining pushed 72 commits to net_hostlist: b13ecb2811: hosts: create store_greylist() and store_whitelist() methods and tests : @lunar-mining pushed 72 commits to net_hostlist: a74557131b: outbound_session: create run2() method that changes run() behavior to new whitelist protocol.... : @lunar-mining pushed 72 commits to net_hostlist: f3b71f4fdc: net: call refresh_greylist() inside outbound_session::run()... 
: @lunar-mining pushed 72 commits to net_hostlist: d7d80b6f11: net: implement a new ProtocolAddr that sends addrs from the whitelist and receives to greylist... : @lunar-mining pushed 72 commits to net_hostlist: 00fdaaa0ea: hosts: reimplement test_greylist_store() : @lunar-mining pushed 72 commits to net_hostlist: c074282301: net: remove channel from the whitelist and add to the greylist if we fail to establish a connection. : @lunar-mining pushed 72 commits to net_hostlist: efe3ca7214: net: move whitelist_fetch_address_with_lock() to hosts, and change whitelist_downgrade() function call to take an url, not an (addr, u64) : @lunar-mining pushed 72 commits to net_hostlist: 7952b8ad41: lilith: store last_seen in host list. also change outbound_session to run new protocol : @lunar-mining pushed 72 commits to net_hostlist: 9a09e8c6cd: net: remove HostPtr from ProtocolVersion and update probe_node() : @lunar-mining pushed 72 commits to net_hostlist: 03ce1324bd: net: ProtocolSeed stores addrs on the greylist, and broadcasts own address with last_seen.... : @lunar-mining pushed 72 commits to net_hostlist: 2c01db5270: net: migrate outbound sessions over to new protocol. also replace lilith periodic_purge with periodic_cleanse.... : @lunar-mining pushed 72 commits to net_hostlist: a6c74eda87: net: migrate to new AddrMessage format : @lunar-mining pushed 72 commits to net_hostlist: 053cb71a52: net: move refresh_greylists() out from hosts and implement GreylistRefinery struct/ process in outbound session... : @lunar-mining pushed 72 commits to net_hostlist: 4f0c4cdc0a: net/lilith: move refresh_whitelist() process out of hosts and back into lilith. : @lunar-mining pushed 72 commits to net_hostlist: a19e20e006: net: cleanup : @lunar-mining pushed 72 commits to net_hostlist: 9f25cb4f10: net/ settings: add "advertise" to settings (default value = true) : @lunar-mining pushed 72 commits to net_hostlist: a7b4f60af4: net: implement ping_node() in OutboundSession and ping self before sending own address in ProtocolAddr, ProtocolSeed... : @lunar-mining pushed 72 commits to net_hostlist: 519353ae42: hosts: fix minor typo : @lunar-mining pushed 72 commits to net_hostlist: cc602048d6: net: standardize format + fix logic error on protocol_seed, protocol_address self_my_addrs() : @lunar-mining pushed 72 commits to net_hostlist: 175f6e78a1: net: commit working test : @lunar-mining pushed 72 commits to net_hostlist: c0a47457f8: net: BUGFIX: stop duplicate entries in greylist... : @lunar-mining pushed 72 commits to net_hostlist: de743a03b6: lilith: remove all peerlist filtering logic : @lunar-mining pushed 72 commits to net_hostlist: d4541d4315: net: fix typo in protocol/mod.rs documentation : @lunar-mining pushed 72 commits to net_hostlist: c2ba96dc6f: net: move GreylistRefinery and ping_node() to new hosts submodule. rename hosts.rs to hosts/store.rs. update host imports and ping_node usage : @lunar-mining pushed 72 commits to net_hostlist: 7cf5a1e081: net: delete hosts.rs : @lunar-mining pushed 72 commits to net_hostlist: 066d3dc9c5: net: fix warnings and cargo fmt : @lunar-mining pushed 72 commits to net_hostlist: 748c659f93: lilith: fix warnings : @lunar-mining pushed 72 commits to net_hostlist: de2fb840bf: net: invoke GreylistRefinery in p2p.rs and cleanup : @lunar-mining pushed 72 commits to net_hostlist: 0639e9bdf7: net: working greylist protocol... 
: @lunar-mining pushed 72 commits to net_hostlist: 2a2d516ac4: net: if a greylist peer is non responsive, remove it from the greylist : @lunar-mining pushed 72 commits to net_hostlist: ae5b4d0a69: net/store: reimplement test_greylist_store() : @lunar-mining pushed 72 commits to net_hostlist: c4ebcb3d45: net: reimplement address filtering on greylist_store().... : @lunar-mining pushed 72 commits to net_hostlist: 4adc0585c0: net: change try_read() and try_write() to read() and write() and cleanup warnings... : @lunar-mining pushed 72 commits to net_hostlist: a61a08c020: net: remove whitelist_store_or_update call from OutboundSession... : @lunar-mining pushed 72 commits to net_hostlist: 560b332e37: net: create perform_local_handshake which does a version exchange without adding channel to the p2p store, and use in ping_node : @lunar-mining pushed 72 commits to net_hostlist: 065f254661: lilith: comment out broken load_hosts code and add FIXME note : @lunar-mining pushed 72 commits to net_hostlist: 3725de07ec: net: and anchorlist and minimal utilities. also clarify hosts specific TODOs. : @lunar-mining pushed 72 commits to net_hostlist: ebe8eb1626: net: check whether host is in the peerlist before adding to greylist. also make additional anchorlist utils.... : @lunar-mining pushed 72 commits to net_hostlist: 03ae65956a: net: add peer to the anchorlist with an updated last_seen when we call p2p.store() on a connected channel : @lunar-mining pushed 72 commits to net_hostlist: b5bf749fe9: net: replace outbound connection loop with monero grey/white/anchor connection_maker()... : @lunar-mining pushed 72 commits to net_hostlist: c850f629b8: net: cleanup connect loop code reuse by implement connect_slot() method. also prevent infinite loop by doing peer discovery when the hostlist is empty.... : @lunar-mining pushed 72 commits to net_hostlist: 6a39e926f1: net: prevent inbound session channels from being stored in the anchorlist : @lunar-mining pushed 72 commits to net_hostlist: 0096f778c6: net: improve outbound_session connection loop logic. : @lunar-mining pushed 72 commits to net_hostlist: 03e6e99e90: net: move host selection logic back into hosts/store to avoid insane nesting in outbound session loop : @lunar-mining pushed 72 commits to net_hostlist: 5be6a07c61: net: add save_hosts() and load_hosts() methods and invoke on greylist refinery start and stop : @lunar-mining pushed 72 commits to net_hostlist: b456d8f5ec: lilith: remove load and save host functionality (made redundant by greylist upgrade) : @lunar-mining pushed 72 commits to net_hostlist: d15cc3b2bd: net: read hostlist path from Settings. Define a default setting and allow overriding in config : @lunar-mining pushed 72 commits to net_hostlist: 995ff6f6c2: lilith: add hostlist path to NetInfo and default config : @lunar-mining pushed 72 commits to net_hostlist: 6e8671d5b0: net: create greylist_refinery_interval in net::Settings and update TODOs : @lunar-mining pushed 72 commits to net_hostlist: ca4d523dd3: net: remove unwrap()'s and cleanup : @lunar-mining pushed 72 commits to net_hostlist: a555f2e744: net: add anchor_connection_count and white_connect_percent to Settings and cleanup : @lunar-mining pushed 72 commits to net_hostlist: 51b4263a93: net: remove connection from anchorlist when it disconnects and cleanup.... : @lunar-mining pushed 72 commits to net_hostlist: 873cd35e0e: net: add hostlist documentation woah! 
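A toy sketch of the grey/white refinery loop the commits above describe: periodically probe a greylist entry, promote it to the whitelist with a fresh last_seen if it responds, and drop it otherwise. Everything here (types, field names, the synchronous probe closure) is a simplified stand-in for the real net::hosts code.

struct Hosts {
    greylist: Vec<(String, u64)>,  // (addr, last_seen)
    whitelist: Vec<(String, u64)>,
}

impl Hosts {
    fn refine(&mut self, probe: impl Fn(&str) -> bool, now: u64) {
        if let Some((addr, _)) = self.greylist.pop() {
            if probe(&addr) {
                self.whitelist.push((addr, now));
            }
            // Non-responsive peers are simply dropped from the greylist.
        }
    }
}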
It's snowing commits :D if you're not connected to ircd and someone tries to send you a dm, will you receive it when you 'log on'? forgot to change my name will it stay in the buffer? Or do you have to be connected? hello! hey ziggurat nice name hey a used name here, have you seen this name before? deki no I haven't seen it used here before, then again I've only been active here for a few days are you a new user? i have been here before cool hey ziggurat, greets root: no we have a new version called darkirc which does have that feature ok thanks gm Didn't get my main ircd reconnected yet test test back gm Regarding having leaf_pos in the coin serial, it's likely not a good idea or a working one. I realised that you don't know your minted coin's leaf_pos in advance So this is not something that is deterministic : @lunar-mining pushed 12 commits to master: f8dc600fd9: dchat: renamed dchat to dchatd and add placeholder dchat-cli : @lunar-mining pushed 12 commits to master: 45a732cf09: doc: update dchat tutorial chapter 1 : @lunar-mining pushed 12 commits to master: 1eab3398c5: doc: add dchat tutorial to SUMMARY : @lunar-mining pushed 12 commits to master: 6de6815332: Cargo.toml: change dchatd directory to example/dchat/dchatd : @lunar-mining pushed 12 commits to master: 6067b44961: doc: fix dchat tutorial chapter2 : @lunar-mining pushed 12 commits to master: ea28c73fb2: doc: create dchat tutorial chapter 4 and specify TODOs : @lunar-mining pushed 12 commits to master: c03f162c78: doc: finalize dchat tutorial and add TODOs : @lunar-mining pushed 12 commits to master: 4f5d7ddb98: doc: update SUMMARY with new dchat tutorial flow : @lunar-mining pushed 12 commits to master: bbf2a67531: dchat: add anchors/ fix ports/ uncomment daemon : @lunar-mining pushed 12 commits to master: d2fad919d1: Cargo.lock: update dchat dependencies : @lunar-mining pushed 12 commits to master: e5b2c9c767: dchat: remove deleted files and add new ones : @lunar-mining pushed 12 commits to master: ac542ec675: doc/ dchat: add TODO brawndo, it's not in the coin or serial it's in the nullifier nullifier = hash(serial, secret, coin, coin_pos) (you need some unique info associated to only that coin, since duplicate coins can be created) we could also just ban non-unique coins, and put the coin in the nullifier rather every coin *must* be unique We are already enforcing that coin uniqueness There's a set of all seen coins ah so then we don't need the leaf_pos ah ok btw regarding getting the u32, we either have to modify the merkle gadget (less proper way) yeah allowing duplicate coins seems like a bug anyway since the serial and coin_blind should be random Or we can use the decomposed bits in some way to create an Fp (more proper way) ++ yeah i just couldn't see how to get those decomposed bits since unfortunately it's done inside the merkle gadget but anyway i guess it's fine now we have a simple fix that's guaranteed to work (we should add a comment above nullifier derivation in burn.zk that coins are guaranteed to be unique) Sounds good should i add it then? 
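A concrete shape of the "put the coin in the nullifier" idea being converged on here, computed outside the circuit (in burn.zk the same relation is constrained in ZK). This uses the poseidon primitives from halo2_gadgets, which the darkfi circuits build on; treat it as a sketch of N = Poseidon(secret, coin) rather than the repo's exact code.

use halo2_gadgets::poseidon::primitives::{ConstantLength, Hash, P128Pow5T3};
use pasta_curves::pallas;

// N = Poseidon(secret, coin): the coin is guaranteed unique, and the
// secret stops anyone from linking known coins to nullifiers by hashing.
fn nullifier(secret: pallas::Base, coin: pallas::Base) -> pallas::Base {
    Hash::<pallas::Base, P128Pow5T3, ConstantLength<2>, 3, 2>::init()
        .hash([secret, coin])
}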
Yes please gr8 ++ :) ;) <3 in hindsight, seems obvious the nullifier should commit to the coin being spent if coins could contain some verification code (like btc script), the nullifier would produce the values satisfying that script then we could anonymize contract calls Yeah So coins are always unique But serials themself might not be I think therefore instead of N=Poseidon(secret, serial), it should be N=Poseidon(secret, coin) In Burn.zk coin is derived using the secret so I think we'd be good on that front nice i already deleted serial, was considering whether we keep secret but i think we should keep it What secret? Coin C=Poseidon(pubkey,value,token,serial,spend_hook,user_data) Can't be less attrs pubkey = secret * NULLIFIER_K nullifier = poseidon(secret, C) Does that make sense? yeah but in ZK we check we own the secret key for that pubkey so it's redundant but it's fine to keep it imho You should because you can otherwise keep hashing known coins And link them to nullifiers If N=Hash(C) That's bad yeah true true we need one extra thing in there yep secret key is good. I'll add a note about that ++ btw we're missing the ability to do ec_mul_scalar(s, point), where point is not constant That's not circuit-native so quite complex to do There were some implementations for that, I need to look it up halo2wrong or something like that ah yes https://github.com/privacy-scaling-explorations/halo2wrong But doesn't ec_mul() do the trick? ah no that's fixed point Yeah I'm not sure for diffie hellman, i had to use mod_r_p : @Dastan-glitch pushed 1 commit to master: 30eb6bba19: money: switch to new nullifier scheme N = hash(secret, coin) I remember learning about Diffie-Hellman key exchange when I did my cryptography elective (while ago now) nice to see this stuff being implemented beyond theory it's a little trick to derive a shared secret, used mainly for asymmetric encryption interesting, do you go into ElGamal encryption? Going through my old notes and it says it turns DH protocol into a cryptosystem yes we use el gamal for notes, see darkfi/src/sdk/src/crypto/diffie_hellman.rs awesome, thanks np : @Dastan-glitch pushed 2 commits to master: 3e2b475139: update Cargo.lock gm ser gm test test back gm all, 6pm here in aus gm do you guys use copilot to code in rust? Is it accurate? never used it, i'm doubtful tho mayb im wrong yeah fair, it can be hit and miss from my experience with other stuff same never used it Supposedly ppl use it to tab-complete test units and stuff But I found it not that smart, although I only used it for a tiny bit ACTION going afk Happy 2024! HNY! see you, HNY <3 https://www.youtube.com/watch?v=z-zkSHgGiXg Title: Happy New Year! - Snowpiercer - YouTube XD ok afk now until tmrw happy tidings anons! good tidings i am working on a commit lol anyway NYE kinda happened already (solstice) HNY! freem: dm'ed on tg, looking to have a quick chat, could dm you here if you prefer i just don't have ircd on my phone happy new year everyone! happy new year! already 2024 here Happy new year! two and half an hour in 2024 and still nothing special greets test test back yo : @lunar-mining pushed 8 commits to net_hostlist: 4d4392f9e8: net: add test module to mod.rs : @lunar-mining pushed 8 commits to net_hostlist: 07c2d667e1: session: remove redundant anchorlist write... : @lunar-mining pushed 8 commits to net_hostlist: ca885a43ee: store: improve error naming... : @lunar-mining pushed 8 commits to net_hostlist: 2696290aad: store: fix logic on is_empty_hostlist()... 
: @lunar-mining pushed 8 commits to net_hostlist: 1578138e8f: outbound_session: move fetch_address logic into new function : @lunar-mining pushed 8 commits to net_hostlist: 18479be298: test: add seed node to net/test.rs : @lunar-mining pushed 8 commits to net_hostlist: 426efdf90b: chore: cargo fmt : @lunar-mining pushed 8 commits to net_hostlist: 27d1b3aa03: hosts: fix logic on anchorlist_fetch_with_schemes... !list No topics gm gm idk if we have a meeting today fren hi reborn hey hi hi, nothing to dig today what do you usually do during the meetings? This is my first one bye welcome deki ty you can add topic by !topic command, and we discuss topics every monday 4:00 cet bye dasman it's weekly dev meeting. ACTION waves ah okay, I'm mainly going through the 'areas of work' and 'git grep -E 'TODO|FIXME' list to see where I can contribute for nwo deki we usually set topics and discuss whatever ppl want, like potential changes or questions about something etc. today is new years day tho so ppl mostly chillin ok nice deki: also check https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html Title: The DarkFi Book ty hey test test hello? first time trying out the ircd weechat anyone online? hey, yes I am oh sweet it's working this is pretty cool! haha yeah it's fun so is this a completely peer 2 peer chat using the darkfi network? yes, that's my understanding of it I only got onto it last week but it's been out since september I think okay cool yeah it has been around a while not sure how long though. but finally took the time to try it out. you can put it on your android phone too if you have one glad to have you here btw! thanks, yeah glad to be here! test test back test2 hey was d/c gm welcome anon b dwdn. dwdn.vdddjjjjjjjjjuu:jjjq! :q! jjjjjjjjjjjjjjjjjjjj reset clear q! :q! q! :q! q! reset 2 oops sorry about that my tty term was fucked gm hey : @lunar-mining pushed 6 commits to net_hostlist: 5f00598c12: outbound_session: remove peer from anchor or whitelist when try_connect fails : @lunar-mining pushed 6 commits to net_hostlist: ad3675eb3c: store: fix death loop... : @lunar-mining pushed 6 commits to net_hostlist: 79e9039b9b: store: create test_fetch_anchorlist() unit test : @lunar-mining pushed 6 commits to net_hostlist: 0ac12ff19d: store: add test_fetch_address() unit test : @lunar-mining pushed 6 commits to net_hostlist: e472003b6d: outbound_session: fetch_address() logic bug fix... : @lunar-mining pushed 6 commits to net_hostlist: 4cde069c53: store: document and cleanup tes test test back first time trying irc, anyone online today? user: hey hey I'm online yay!! haha yay indeed you can set up private messaging by sharing your public key (doesn't have to be on here, can be a DM on telegram), and saving someone elses in the .toml file explanation here: https://darkrenaissance.github.io/darkfi/misc/ircd/private_message.html Title: The DarkFi Book Thank you deki! :) :) I've been going through the TODO and FIXME parts in the codebase, and figured I'd try to fix the task "Don't hardcode to 8 decimals" in bin/drk/src/main.rs but I noticed the function that's being used here: encode_base10(*balance, 8) is also being used elsewhere, for example in wallet_dao.rs line 84, it's being set to 8 decimals there too my question is: should this be fixed in *all* instances where encode_base10 has a fixed decimal value? Or *only* where it's specified 'FIXME' in main.rs ? 
I have to step out, will be back in a few hours but will receive messages on telegram i think it means introduce a constant rather than using 8 !list No topics !topic cleanup contract sections naming Added topic: cleanup contract sections naming (by antikythera) !topic db_contains_key shouldnt have Update perm Added topic: db_contains_key shouldnt have Update perm (by antikythera) gm @antikythera so it's okay to stay as 8? So long as it's defined as a constant? I thought we wouldn't want to hardcode any value because we're dealing with different precisions? Or will balance always be 8? 8 decimals is the max precision possible so yes it just should be a constant (imho) okay great, thanks I'll go ahead and fix that ++ deki: are you around? upgrayedd: yes I'm here, just woke up deki: check the review on the pr okay will have a look, thanks upgrayedd: can't find where the review is? I'm at my PR https://github.com/darkrenaissance/darkfi/pull/248 but can't see any review/comments Title: fixing hardcoded value for decimal places to constant by deki-zedd · Pull Request #248 · darkrenaissance/darkfi · GitHub hey upgrayedd: there's no comments/review antikythera: lol yeah forgot to press submit, its up now upgrayedd: thanks, I got it. About to have dinner will have a look after narodnik: check dm please : @Dastan-glitch pushed 1 commit to master: cc2de1aca1: spec2: concepts page looks like a DAO proposal is only about where to send a certain amount of tokens? Nothing else can be proposed? kanon: the current DAO impl just supports voting to spend from the treasury (following a proposal) we can easily support voting for non-financial proposals like "should we make this software upgrade" for example but the current impl is focused on executing transactions from a shared treasury when a threshold of votes has been reached (and other criteria met, set by the DAO) I see. Thx reborn. Is "we can easily support voting for non-financial proposals..." on the roadmap, and if yes, when could it be expected? Or rather not possible to give any information? kanon: DAO proposal is generic, it can make any contract call it was changed over christmas, the task is completed ah my bad are there any docs on this? antikythera: ^ I see this stuff https://darkrenaissance.github.io/darkfi/spec/dao/dao_propose.html but it's not easy to understand, to say the least Title: The DarkFi Book : @Dastan-glitch pushed 1 commit to master: fca68c54af: fix & update darkirc test script what IDE do people here use for Rust? I'm guessing VS Code is the main one, but wondering if anyone uses RustRover? It's from JetBrains and I like it so far, I've used PyCharm from them and it's a great to use, also available for Linux distros hello kanon: not yet but working on it deki: just a text editor and cargo. IDEs are bloat gm kanon: did you see this btw? https://darkrenaissance.github.io/darkfi/arch/dao.html Title: The DarkFi Book it's old now since the scheme changed last week, but this is the previous version which was non-generic deki: wdym "main one"? : @Dastan-glitch pushed 1 commit to master: dd0e2dabee: spec2: add section on crypto schemes antikythera: i think i figured out where the deadlock is happening in outbound session, we spin up e.g. 8 slots. each slot tries to connect to an address, or does peer discovery i think deadlock happens when e.g. 
slot 1 and slot 2 simultaneously select the same address from the hostlist, and simultaneously run locking checks when trying to connect to it that's why they first add it to the pending list the pending list should avoid this issue ideally upgrayedd: I meant a lot of SW developers I know use VS Code, so I assumed it would be the main choice here? Simple text editor with cargo works too to get good at code, you just need to read, write and think about a lot of code deki: lol I was making a joke on your assumption that we use IDEs... they are all bloat you don't need them the simpler your setup, the better... i use linux terminal + nvim haha fair, it's actually a good way to see them: bloat never considered that recently i added this to my nvim config: map <f5> :!touch /tmp/f5<cr> then i can run this in my terminal: echo /tmp/f5 | entr -nrs "cargo run --release" deki: apart from that, they make devs lazy to dig deeper and learn whats really going on under the hood whenever i press F5, it will run the cargo command ++ simpler, strong fundamentals focus (old school way) antikythera: oh thats a nice keymap good points, I'm gonna get back into vim I think. I used it a lot at my previous role, but have only used IDEs since : @Dastan-glitch pushed 1 commit to master: 6af0437e21: spec2: sections on concepts, notation, pallas/vesta !list Topics: 1. cleanup contract sections naming (by antikythera) 2. db_contains_key shouldnt have Update perm (by antikythera) gm greets gm : @lunar-mining pushed 1 commit to net_hostlist: fcf5a87a28: net: fix deadlock (partial fix)... 99.9% of the time it works 100% of the time bbl anyone know why in my log thing where it usually says Msg: PING localhost, I also sometimes get 'Received Prvmsg { id: *numbers*, nickname: *random letters*, target: ..., message: ...}' but I don't receive any actual message in the chat? is someone actually trying to message me? I did share my public key in here initially, but I also need the other person's public key to properly receive msgs?
you should use make drk in repo root the parent folder darkfi ah ok, so run 'make drk' instead makefile is there for a reason use it cargo build shouldn't work either, unless you only have nightly toolchain as default yes I have nightly toolchain as default also when importing modules, the general formatting is: std, other crates, own crate/super so you should have: use super::{wallet::BALANCE_BASE10_DECIMALS, Drk}; as wallet is a submodule of super so be explicit on your declaration ah right, thanks for the feedback and don't forget to use make fmt and make clippy fmt for code style formatting and clippy for linting in general you should only use make clippy for compiling, to also check linting is make fmt meant to return something? I don't get any feedback, just goes into next line no, but you can see if files were changed as they will become unstaged okay, will try that all soon running make and it nearly finishes then throws one error: failed to run custom build command for 'randomx v1.1.11 (link to github)' Failed to generate Makefile with CMake. I'm guessing I need the RandomX dependency installed? actually I was missing cmake, I thought I had it nm okay I ran make clippy and it compiled with no errors, ok to push again? its your pr don't ask to push lol fair lol ah wait think I forgot something lol not there that was the only correct one in rest crate files and I'm still pretty sure it doesn't compile I forgot to put use super::{wallet::BALANCE_BASE10_DECIMALS, DRK} everywhere else i have the old one aside from make clippy, should I run anything else? make clippy passes? no error shown? no errors when I ran it, I did have some dependency issue before but fixed it it doesn't tell you that there is a random ",d" here: https://github.com/darkrenaissance/darkfi/pull/248/commits/b823e6d024a471d5142db5cbded4d428e874d87f#diff-0d7c313eebe3dffe1a158cf752858f290695722d3de75cdf8a2be4dab6a33430R30 bin/drk/src/wallet.rs:30 weird no it never picked that up, I'm still running this through RustRover btw which says it's in 'preview' lol glhf then I won't bother debugging an IDE just don't push erroneous code I'll change to VS code, haven't made the transition to vim only do whatever, my statement still applies :D fair lol b gm gm first day back at work and spent most of it going through rust documentation :) came across the command 'cargo check', do people here use this for the darkfi project? gm deki: I recommend running `make clippy` for that yeah I started using that yesterday, was curious if people added cargo check into their workflow brawndo: is make clippy the only build/compile related command you should run then, when making any code changes to the project? You should also make sure that any relevant tests pass `make test` will run the full test suite thanks, do I need any other toolchains? I only have the nightly one, but I thought I needed another one? Can't find it in the docs That should be fine https://github.com/darkrenaissance/darkfi#build Title: GitHub - darkrenaissance/darkfi: Anonymous. Uncensored. Sovereign. ty gm gm !list Topics: 1. cleanup contract sections naming (by antikythera) 2. db_contains_key shouldnt have Update perm (by antikythera) https://github.com/darkrenaissance/darkfi/actions/workflows/book.yml Title: Generate DarkFi Book · Workflow runs · darkrenaissance/darkfi · GitHub all the workflows are passing fine, but the book on the website is not updating for some reason i've been trying to fix this the last couple of hours but have no idea what to look for...
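For reference, the shape of the fix under review here (hedged: the constant's exact home and the encode_base10 signature are whatever the repo actually has; this just illustrates the pattern):

// bin/drk/src/wallet.rs
pub const BALANCE_BASE10_DECIMALS: usize = 8; // max precision, per the chat above

// at a call site, following the std / external / own-crate import grouping:
// use super::{wallet::BALANCE_BASE10_DECIMALS, Drk};
// let pretty = encode_base10(*balance, BALANCE_BASE10_DECIMALS);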
hm yeah the branch seems to not be updating Let me check if it works on a manual build it does I meant the push hmm it does reiserfs: How about instead of figuring it out, we rehost it under dark.fi/book/ ? I can set that up relatively easily sure thing, that would be a much nicer URL ACK Arti is really becoming more trouble than it's worth !topic net upgrade status Added topic: net upgrade status (by reborn) you were quite optimistic before about the arti timeline also the project has decent activity It's just a lot of schizo code lol weird the online book just updated like after 2 days I pushed a manual commit sh ah Setting up our own anyway sup hey gm cc hihi !start Meeting started Topics: 1. cleanup contract sections naming (by antikythera) 2. db_contains_key shouldnt have Update perm (by antikythera) 3. net upgrade status (by reborn) Current topic: cleanup contract sections naming (by antikythera) gm we use the term process()/entrypoint inconsistently same for update/apply can we pick some consistent naming? Sec, github is a bit slow We have that define_contract!() macro yep Maybe instead of mapping functions, the main functions could just be called "init, metadata, exec, apply" ? Or rather s,init,deploy, for example, although exec doesn't exec anything we use process/exec/entrypoint, maybe process is best? process/update (or apply if you prefer) i'm not too concerned with naming tbh, just that it's consistent I'd be fine with "deploy, metadata, process, update" cool ty !next Elapsed time: 4.0 min Current topic: db_contains_key shouldnt have Update perm (by antikythera) can i remove this perm from this call? seems wrong to do a read op inside update I think there was an edge case where it was used But yeah Agree that it should be removed ah would be nice to know ok thanks !next Elapsed time: 0.6 min Current topic: net upgrade status (by reborn) Maybe if you disable it you can find where it was used? hey so should be ready for code review on this this week yeah will run the entire test suite reborn: Great news :) have been bug fixing, solved most of them imo, just running thru everything with a fine-toothed comb I just also pushed a TLS port because the smol devs deprecated async-rustls in favor of futures-rustls That's great ah cool that's it from me !next Elapsed time: 1.9 min No further topics Any general updates? I've been away a bit for holidays, now back on track for the final push i started slowly working on the spec How's the spec going? Need any help? it's easy, just need to work on it for 2 weeks and will finish https://darkrenaissance.github.io/darkfi/spec2/crypto-schemes.html Title: The DarkFi Book Excellent I was looking for ways to make event graph sync get faster, but I don't think it can go any faster, tho I noticed instead of topological sort, we can sort the graph using the layer number, idk how it will affect things but it's just something Do you feel it's slow right now? also to get it out of my system I started to do forward syncing and compare, but things look good now and we should start using darkirc more often dasman: Yeah as soon as we merge the p2p improvements we should be good to go Not slow per se, but I thought if it could go faster it'll be better ++ ah yeah the apps will need to be ported over following code review ok, maybe you and upgrayedd can fiddle with that reborn: I could do that as practice to familiarize myself with the changes :) sounds good right ** Oops ++ :D +^+ :) :D anything else? \°.°/ Cool.
I'll be online now 99% of the time so feel free to ping for anything yes, ℕ₆₄ means nintendo 64 sweet antikythera: haha haha for help i see rust is needed ? i need to learn it ? yes dada My plan this week is to dig in a bit into the wasm host functions and see how to apply gas fees there aha great Since they aren't accounted in the normal gas stuff there's a lot of tasks like "make x stable" But I'll mainly go with bytes-written as cost do you have a strategy for gas pricing? ^ ok sounds as good as anything, disk access is slow I think we know the sizes of all buffers passed around host<->wasm So shouldn't be too difficult one thing is that sig verif is slow but not much disk activity Yeah although sled helps a bit since it's able to cache same for proofs i reckon sigs and proofs are slow yeah But we know their cost in advance we ideally need a harness to do benchmarking for all this stuff So we don't always have to verify them would be easier to gather metrics and make judgements We have a bit of that in the contract test harness sec ok so fees, deploying contracts, p2p stuff, ... what are the other big areas? https://github.com/darkrenaissance/darkfi/blob/master/src/contract/test-harness/src/lib.rs#L497 Could be made more granular to have individual statistics for wasm, proofs, and sigs yeah One area is the cross-chain swaps That's here, although stagnant for a few weeks: https://github.com/darkrenaissance/darkfi/pull/246 Title: feat(swapd): begin atomic swap implementation by noot · Pull Request #246 · darkrenaissance/darkfi · GitHub yep Then there is (merge) mining Nothing else rn off the top of my head ok thanks thanks everyone tyty ty :) !end Elapsed time: 15.0 min Meeting ended thanks everyone ty biab dasman: don't forget right now we sync single events you can make it faster by introducing variable sync length(limited by a max constant) brawndo: only 177 gentoo packages to compile :D something else which is not clear to me is the docs talk about PoS but in several places now I saw PoW being mentioned. Are both being used, or one superseded the other, or how do I have to understand this/ something else which is not clear to me is the docs talk about PoS but in several places now I saw PoW being mentioned. Are both being used, or one superseded the other, or how do I have to understand this_ current testnet is PoS, but we are building a PoW one now we will launch w PoW That's pretty impressive. Will it be more bitcoin-style PoW or monero-style? Considering the project's philosophy I'd rather guess monero To allow anyone to mine? ay are there mirrors of the seminars in the wiki? links are dead good idea to run cargo clean as part of your workflow? deki: nah, only to reset everything if ever needed upgrayedd: thanks, btw still sorting out my build errors for my ticket. Going to work soon so will get back to it after glhf kanon: check https://dark.fi/insights/development-update-q423.html Title: Development Update Q423 miagi: oh shit they are down, not sure we have a backup gm test test back antikythera: 3b14e2f671a502f3ac267d53ed433fbcb0164392 These are different nullifiers to money nullifiers, right? 
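A hedged sketch of the bytes-written gas idea from the meeting: each host function charges cost proportional to the buffer it moves across the host<->wasm boundary, on top of the interpreter's own metering. GAS_PER_BYTE, GasError and the bookkeeping are invented for illustration, not the darkfi runtime's actual accounting.

const GAS_PER_BYTE: u64 = 1;

#[derive(Debug)]
enum GasError { OutOfGas }

// Called by each host function with the size of the payload it handled.
fn charge_gas(remaining: &mut u64, bytes: usize) -> Result<(), GasError> {
    let cost = (bytes as u64).saturating_mul(GAS_PER_BYTE);
    if *remaining < cost {
        return Err(GasError::OutOfGas);
    }
    *remaining -= cost;
    Ok(())
}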
yes its for voting ok just checking Remember that for hashing anything, you always want to have the same number of elements Because H(a,b) and H(a,b,c) can theoretically produce a collision So in a set of some hashed commitments, you want those hashes to always take the same number of elements That's what cryptographers mean when they say that a hash function is collision resistant for fixed input size aha thanks, makes sense will make sure of that Cool Also sometimes you'll end up with the same number of elements for different commitment types There you can for example add another element which serves as salt, for example the prefixes we use to derive a Token ID vs Contract ID H(1, a, b) and H(2, a, b) will then be different hashes Not always necessary, but sometimes good to make this difference if some context is similar and you want to avoid confusion I suppose it also helps with anonymity For example now in 3b14e2f671a502f3ac267d53ed433fbcb0164392 there won't be matching with money nullifiers because the inputs are different Without that you'd know "oh, this coin voted on a proposal and was now spent" yeah true, the reason I added that though was because nullifiers should be distinct per proposal idk what the conclusion was of the convo about locking coins into votes I think they don't have to be locked ok that's good, means no changes Locking them also inherently brings a bad attack Which is: Someone creates an "unwanted/bad" proposal, then everyone votes on it to make it not pass In the meantime the same proposal is created again, and now nobody can stop the second one It's funny how the base functionality of everything is really really simple And then the job is to add all these constraints around to protect it the locking would be fine since proposals fork the state tree before voting occurs the caveat is that if i unlock my tokens, i can still vote on already active proposals, just not any new ones It's not fine because you can make both at the same time then the proposal doesn't pass, it expires I mean Thinking about it, I don't understand the locking mechanism fully If the state is forked, there is nothing to lock anyway And I do think you should be able to vote on all proposals with your tokens Not be forced into picking your favorite proposal yeah i don't understand either upgrayedd: Does this look alright to you? https://github.com/darkrenaissance/darkfi/commit/2ad9183239842a502bf7df8afd3cea294ff3e419 Title: Server Error · GitHub brawndo: shouldn't the DeployParamsV1 exist in sdk? so both validator and deployor can use them without circular deps Maybe yeah, good idea I'll move it But otherwise? why is the new runtime needed? can't use the already initiated one? oh no different wasm bincode Yeah the parent is the deployoor contract which verifies the contract ID derivation and the wasm symbols Then the child runtime is running the new wasm's deploy() func so the parent has access to it right? gotcha gotcha gotcha and since its all in the overlay, its alive only in there so consecutive txs can actually use it Yeah without it actually existing in the blockchain yeah lgtm Cool Need to review the actual runtime.deploy function now Then I'll try to make something for the harness noice gj https://github.com/darkrenaissance/darkfi/blob/master/src/runtime/vm_runtime.rs#L412 There is this, but I think it's not relevant anymore? https://github.com/darkrenaissance/darkfi/blob/master/src/runtime/vm_runtime.rs#L422 This should already be the overlay?
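The salt/prefix advice above, as a sketch (same poseidon primitives as before; the domain constants are illustrative, not the repo's actual prefixes): two commitments over the same (a, b) can never collide across contexts because the first hashed element differs.

use halo2_gadgets::poseidon::primitives::{ConstantLength, Hash, P128Pow5T3};
use pasta_curves::pallas;

const DOMAIN_ONE: u64 = 1; // e.g. a money-side commitment
const DOMAIN_TWO: u64 = 2; // e.g. a dao-vote-side commitment

// H(domain, a, b): fixed input length plus a domain-separating prefix.
fn domain_hash(domain: u64, a: pallas::Base, b: pallas::Base) -> pallas::Base {
    Hash::<pallas::Base, P128Pow5T3, ConstantLength<3>, 3, 2>::init()
        .hash([pallas::Base::from(domain), a, b])
}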
I think BlockchainOverlay already has the contract stuff sec let me check https://github.com/darkrenaissance/darkfi/blob/master/src/blockchain/mod.rs#L421-L424 yeah so it should go over the overlay but you can verify/test it easily just spin a new empty db, create overlay over it and test the deploy tx against it, then try to access that deployed wasm from next tx in the overlay and then from the actual db second case should fail *nod* brawndo: btw we have the environment defining blockchain here: https://github.com/darkrenaissance/darkfi/blob/master/src/runtime/vm_runtime.rs#L80-L81 so everything goes only through the overlay already We made that change some time back, don't really recall when XD Yeah there's WasmStoreOverlay implemented thanks past us :D :D yay stack overflows in wasm brawndo: just download more ram ^_^ oh my bad I was deserializing the wrong types Let's see if it works now ah no it's real ok I'm including a new build dependency The wasm bins have to be stripped to <1M : @parazyd pushed 1 commit to master: 849da0a521: contract: Strip built WASM binaries using wasm-strip from the wabt toolkit ooh bot's back do you guys have any plans to use solidity for this project? Or will it not be part of ethereum? Out of scope Solidity is a bad language anyway how come? I started learning it a few months ago to get into auditing, seemed to be all the rage A friend wrote this: https://makemake.site/post/solidity-considered-harmful Title: Solidity considered harmful - makemake interesting, having said that I have seen a few comments on twitter saying that there's a shift towards Rust https://ewasm.readthedocs.io/en/mkdocs/ Title: Ethereum WebAssembly (ewasm) - Ethereum WebAssembly To me this is interesting lmao "In the end, Solidity is Java mated with C++ in such a way that the best genes of both failed to exert a phenotype." interesting, haven't heard of ewasm brawndo: lmao that you added the dep only for debian That's for the CI :p THE COMMUNITY can add the rest : @parazyd pushed 1 commit to master: 43030a9eea: chore: Enable some additional arti-client crate features... : @parazyd pushed 1 commit to master: 561318cf6f: runtime: Disable payload debug message on Deploy do you run 'cargo test' as part of your development? Or do you only do the fuzz testing described in the doc? `make test` ah right that one too have to admit I got a good laugh out of seeing 'deployooor' lol :D deki: I told you before, use/read the makefile : @aggstam pushed 1 commit to master: 4cc5cf6217: contrib/dependency_setup.sh: wabt dep dependency added for xbps : @aggstam pushed 1 commit to master: e5660ce75d: validator: check if proposal already exists when looking ofr its fork index... : @Dastan-glitch pushed 1 commit to master: 152e81984d: spec2: DAO model what's the correct way to convert a blake3 hash to pallas::Base? darkfi/src/contract/dao/src/model.rs:125 this is what we do currently contract/consensus/src/client/proposal_v1.rs:257 antikythera: ^ https://docs.rs/pasta_curves/latest/pasta_curves/struct.Fp.html#method.from_uniform_bytes Title: Fp in pasta_curves - Rust aha ok ty wait why is that taking [u8; 64] Fp is only 32 bytes long... https://crypto.stackexchange.com/questions/84327/how-does-montgomery-reduction-work Title: implementation - How does Montgomery reduction work? - Cryptography Stack Exchange
i think from_repr() is the correct call, since internally this Fp uses montgomery form from_repr() will not do modulo and will fail if it's a non-canonical encoding https://docs.rs/pasta_curves/latest/pasta_curves/struct.Fp.html#method.from_uniform_bytes is the correct method Title: Fp in pasta_curves - Rust Fp can take a bit less than full 32 bytes I suppose that method takes 512 bits (64 bytes) so it can be generic It's not part of pasta_curves, but rather an inner crate trait ff crate >The length N of the byte array is chosen by the trait implementer such that the loss of uniformity in the mapping from byte arrays to field elements is cryptographically negligible. https://docs.rs/ff/0.13.0/ff/trait.FromUniformBytes.html#implementing-fromuniformbytes Title: FromUniformBytes in ff - Rust deki: So eWASM was the planned execution environment for the original ETH 2.0 system, which would have had multiple executable shards, some of which could have been EVM and some using eWASM, which was a variant of WASM with gas counting inserted and some points of non-determinism removed. From maybe 2016 until 2021 or later huge efforts were poured into it but that was all thrown away when it became apparent that it would not have been any better. Then executable shards got dropped anyway. Then it was decided to do The Merge instead, and to make the beacon chain executable, with future shards being data only. It was a huge distraction from actually making the EVM better (ie. EIP-615, then simple subroutines and now EOF). eWASM is dead. Other chains use WASM, but not eWASM. dark-john: thanks for the info upgrayedd: yes I know, was going through the Rust lang book and came across cargo test, been going through makefile too aha ty brawndo i think from_raw() and from_uniform_bytes() both work equally : @lunar-mining pushed 2 commits to net_hostlist: 96cad54d81: net: 99.9999% of the time it works 100% of the time... : @lunar-mining pushed 2 commits to net_hostlist: 736459aa51: chore: cleanup... commit title of 96cad54d81 is misleading... it actually works 100% of the time (sample size of 500) : @Dastan-glitch pushed 1 commit to master: f573585b72: spec2: vote nullifiers and finish dao model page has anyone got github 2fa working with a yubikey? or should i use codeberg to make a PR? nm re: yubikey, got sorted gm antikythera: Yes probably, though you have to be careful how you construct the from_raw array yeah i'm not sure of the encoding, but it doesn't matter since i'm mapping a hash to a pallas::Base value In any case I believe from_uniform_bytes is more correct usage of the API Could make a helper wrapper function even that's fine, i'll use it if you think it's better quick q: does it pad zeros on right or left? nw if you don't know, i'll look it up Who pads? i mean is it little or big endian? i guess little Yeah from_uniform_bytes takes little endian.
So when you have a 32-byte hash you want it first: [fffff...00000] ++ ty np https://docs.rs/ff/0.13.0/ff/trait.FromUniformBytes.html Title: FromUniformBytes in ff - Rust See here what they wrote about the reasoning ahh excellent, makes a load of sense great So it's a cryptographic reason in essence from reading this page, i get the impression that 32 bits of 256 bits are more likely to be used when mapping a 32 byte array to pallas::Base whereas adding another 32 bytes, reduces the risk of those 32 bits https://en.wikipedia.org/wiki/Continuous_uniform_distribution Title: Continuous uniform distribution - Wikipedia the blake3 hash is uniform in the range [0, 2^256), which is mapped to [0, p] the values [p + 1, ..., 2^256 - 1] are mapped to [0, ..., 2^256 - p] (which is 32 bits) this makes the hashing to pallas::Base non-uniform, but here they are saying that having an extra 32 bytes adds more randomness to make it less of an issue "The length N of the byte array is chosen by the trait implementer such that the loss of uniformity in the mapping from byte arrays to field elements is cryptographically negligible." Should we then be duplicating the hash so we have 512 bits? Or pick another hash function? ^_^ well it would need to be random, so you could hash it again or use a bigger hash BLAKE2b is 512 bits thats probably why they use blake2 interesting lol upgrayedd: Is `pow_target` based in seconds in ValidatorConfig ? LFG https://github.com/darkrenaissance/darkfi/pull/249 Title: Net hostlist upgrade by lunar-mining · Pull Request #249 · darkrenaissance/darkfi · GitHub : @parazyd pushed 1 commit to master: ec1e9ff64e: contract/test-harness: Set fixed-difficulty=1 mining reborn: Nice! upgrayedd: Also, what consensus is the contract test-harness using? Could you please add a method where I can just generate a new block and give a holder as the mining reward recipient? With fixed-difficulty=1, any block should pass normal verification, nothing special should have to be done reborn: i just was skimming through and noticed in protocol_seed: if self.settings.advertise == false { minor thing thanks : @lunar-mining pushed 1 commit to net_hostlist: 2dbaf413a0: protocol_seed: fix bool syntax : @Dastan-glitch pushed 1 commit to master: 993587ada7: zk/debug: add Uint32 and MerklePath for export_witness_json() brawndo: yes in seconds(defined here: https://github.com/darkrenaissance/darkfi/blob/993587ada7ac2cfbef2c19349414237ec9586a34/src/validator/pow.rs#L82-L83) test-harness doesn't do any consensus, it uses validator add_transactions() to write to the state directly we already have a pow reward tx you can use in there reborn: I added a few comments on the PR, other than that LGTM :) upgrayedd: ah but how do I advance the block height? 
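putting the above together, the mapping looks roughly like this (blake2b_simd and from_uniform_bytes are the real APIs, the personalization string is just an example):
```rust
use blake2b_simd::Params;
use pasta_curves::{group::ff::FromUniformBytes, pallas};

/// Map arbitrary bytes to a pallas base field element with negligible bias.
fn hash_to_base(data: &[u8]) -> pallas::Base {
    let hash = Params::new()
        .hash_length(64)              // 512-bit output, not 256
        .personal(b"DarkFi_HashToFp") // example personalization, not canon
        .hash(data);
    let mut buf = [0u8; 64];
    buf.copy_from_slice(hash.as_bytes());
    pallas::Base::from_uniform_bytes(&buf) // little endian, reduced mod p
}
```
on the bias point: reducing a uniform 256-bit value mod p (p is around 2^254 here) noticeably skews some residues, while with 512 input bits the deviation from uniform is on the order of 2^-257, which is the "cryptographically negligible" loss the ff docs mean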
I wanted something like: for i in 1000 { generate_block(&holder) } check how we do unstake request and then unstake where we waive the time lock tldr: you just generate the height/slot you want and pass that to the timekeeper ok I'll have a better look at it, ty https://github.com/darkrenaissance/darkfi/blob/master/src/contract/test-harness/src/consensus_unstake.rs#L132 https://github.com/darkrenaissance/darkfi/blob/master/src/contract/consensus/tests/stake_unstake.rs#L194-L200 : @Dastan-glitch pushed 1 commit to master: 9d82ce3a1c: dao::vote(): correct mistake in nullifier check those functions in those tests we even have tests trying to use coins before timelock checks, showcasing wrong height/slot so you want something similar I didn't add consensus there as contracts should only care about current state and height(u64) *nod* can i switch the DAO auth calls hashing to BLAKE2b-512? lmk if something ain't clear after you read those tests antikythera: Yeah sure, use https://docs.rs/blake2b_simd/latest/blake2b_simd/ Title: blake2b_simd - Rust ok gr8 : @parazyd pushed 1 commit to master: 6b238fdb9a: ci: Install wabt for book gen woah this is useful https://lib.rs/ Title: Lib.rs — home for Rust crates // Lib.rs never actually visited the homepage before Your favorite :P https://lib.rs/os/macos-apis Title: macOS and iOS APIs — list of Rust libraries/crates // Lib.rs https://lib.rs/crates/sled-overlay Title: sled-overlay — db interface for Rust // Lib.rs lmao :D nice brawndo: slocs seems a bit high does it count comments and/or tests? >Dependencies lol we literally have single dependency, sled XD sled and its recursive deps oh oh yeah the actual one is 293? thats low XD #773 in Database interfaces it should be higher XD haha 770 we going up ahahahaha LOL Keep refreshing our 3 views moved it up it should go by usage not views, but maybe yeah brawndo: ACK : @Dastan-glitch pushed 1 commit to master: 4d87af64f4: dao: replace use of blake3 hash with blake2b. See code comments for explanation of the rationale ty the book is fixed now brawndo: here? what do you think is better: smol::future::or or futures::select? select allows for more than one task FuturesUnordered antikythera: thats in futures right? yep it's used in net. I saw brawndo use it, and thought it's better than futures::select because it's not a macro smol::future::or is not either, thats why I asked XD i think futures::select is used to get the first future to finish and grab the return value FuturesUnordered lets them all finish select! polls all futures and finishes once any of them finish I don't want them all to finish yep true, well sry for trolling ;) I want to grab the first one finishing no worries my question is more since I don't know whether this will look clean: smol::future::or({smol::future::or(task_0(), task_1())}, task_2()) compared to futures::select!{task_0(), task_1(), task_2()} the macro is better well maybe not actually since task_0 and task_1 invoke a channel to task_2 so the tldr is task_0 and task_1 combine a channel to stop task_2, in case one of them finishes you can also use FuturesUnordered (looking at the doc) so instead of passing the sender to both of them, you can combine them in a future if you call .clear() it will drop any remaining yeah but in that case, all 3 futures must return same stuff right?
which I don't need/want i got around this before by doing branches inside the macro so you wrap the future *inside* the macro branches and just ignore the return value you can use the return value for signalling I do the same by branching the smol::future::or it werks ok as you wish Yeah avoid select!() if you can We can eventually get rid of the futures crate was reading through the previous convo you all had, want to ask why this required a change: convert a blake3 hash to pallas::Base. Was the current way in model.rs wrong? antikythera: brought it up initially the range of pallas::Base is [0, p-1] where p < u256 (=32 bytes) so for those values produced by blake3 hash which are [p, u256::MAX], they get mapped to [0, u256::MAX - p] so those 32 bits of pallas::Base are hashed to more frequently an ideal hash function is perfectly uniform. all the crypto is based on this assumption. if the assumption is improper, it opens us up to bad unforeseen attacks interesting, so it's a legitimate security concern above anything else? blake2 is more secure but slower than blake3 ACTION just parroting what they/them read on wiki reading about it now https://github.com/BLAKE3-team/BLAKE3 Title: GitHub - BLAKE3-team/BLAKE3: the official Rust and C implementations of the BLAKE3 cryptographic hash function never heard of this hash function though, I only did one cryptography elective at university and don't remember being taught this (but we had stuff like DH key) check out the spec being written in doc, it might be useful (still unfinished) : @Dastan-glitch pushed 1 commit to master: 45f5bd506f: dao model: add note about blake2 hash function usage nice, will do thanks is blake3 the standard that's used throughout crypto? there's no standard hash function but several trusted ones including blake if there was a standard, it's probably the sha functions I see : @Dastan-glitch pushed 1 commit to master: ac3f29036e: book: git mv spec2 spec : @Dastan-glitch pushed 1 commit to master: ead4b2338f: book: botched move lol yeah ffs i did mv spec2 spec/ ... talking with devs and trying to link them the spec upgrayedd: How do I advance the height after executing the pow reward tx? 14:28:16 [ERROR] Internal error getting from slots tree: Slot 1 not found in database brawndo: https://github.com/darkrenaissance/darkfi/blob/master/src/contract/money/tests/pow_reward.rs#L131-L133 each time you increase the current_height you generate its slot ah gotcha, it's the generate_slot fn I was missing I know we have mixed pow and pos naming :D future proofing is a bitch XD Also the first generate_slot() should be for slot 0, right? yeah but we already have that baked in the harness so its there as its part of the genesis block so the harness/db already has it so your first generate_slot should be for slot 1 hm ok assuming you started with current_height = 0; current_height += 1 or current_height = 1 : @parazyd pushed 3 commits to master: dc0a1fb134: contract/money: Ignore benchmark tests when running test units : @parazyd pushed 3 commits to master: efedcbf856: contract/money: Rename "integration" test to "token_mint" : @parazyd pushed 3 commits to master: e659f2c6d0: contract/money: WIP complete integration test I'm doing something wrong Can you check in money/tests/integration.rs ? last pushes?
git HEAD I tried with various combinations of height/slot to no avail (Can run with make test-integration) Probably wanna enable the debug log as well This is very strange: [DEBUG] (2) runtime::vm_runtime: wasm executed successfully [DEBUG] (2) runtime::vm_runtime: Contract returned: I64(33) ah no nvm it does catch an error your first mistake is you created a pow_reward_tx for slot 0 It's for slot 1 th.pow_reward(&Holder::Alice, None, verification_slot + 1, None)?; yeah but you have to generate the slot before executing the tx oooh https://github.com/darkrenaissance/darkfi/blob/master/src/contract/consensus/tests/stake_unstake.rs#L52-L53 https://github.com/darkrenaissance/darkfi/blob/master/src/contract/consensus/tests/stake_unstake.rs#L66-L68 https://github.com/darkrenaissance/darkfi/blob/master/src/contract/consensus/tests/stake_unstake.rs#L70-L75 in this example we start at height 1 Lemme check we airdrop some tokens to alice then stake those tokens progress height after grace period and generate that slot then execute a proposal for alice so the proposal is verified against the slot after grace period we created : @parazyd pushed 1 commit to master: d0eac00cea: contract/money: Fix slot generation in integration test ^ Thanks :) yeah that should work now It passes(TM) I know its a bit tedious, but I think its better than having to handle consensus/blocks with this you just simulate current state in a height yy it's fine now that I understand it hooray :D my mind is a bit mushed smol and channels don't wanna play together I think the route is to create a separate StoppableTask for what I'm trying to achieve and these tasks communicate with the channels so not using smol::future::or What's the idea? we need a listener to listen for incoming proposals from the network so when receiving them, we append them to our current state, and then reevaluate if current best fork has changed so we notify the mining threads to stop and start mining the new best fork meh What happened to having a separate miner program? Mining in darkfid will inherently slow it down a lot Because async will not work properly since the threads will be taken by the miner lol thats exactly the issue I have now fuck I forgot about that idea darkfid sending jobs to the minerd FUCK kek Yeah they can communicate over jsonrpc You can even build an API that works with xmrig (But that'd have to be HTTP so probably better not) well I will do it for our native stuff first and we improve after http?
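re the harness flow above, condensed (assumes the `th` test-harness from those tests; exact signatures illustrative):
```rust
// slot N must exist before a reward tx for slot N is verified;
// slot 0 comes baked in with the genesis block
let mut current_height = 0u64;
for _ in 0..10 {
    current_height += 1;
    th.generate_slot(current_height).await?;                    // create the slot first
    th.pow_reward(&Holder::Alice, None, current_height, None)?; // then the reward tx
}
```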
lol Yeah because monero daemon did it ok no worries, this actually saves me a lot of headaches trying to make all the threads play together nicely we just need the darkfid to be the state/network observer and simply send requests to the miner :) darkfi-saurond right now what was happening was all the threads being stubborn since async etc I was going to name it barad-dur Two Towers reference :D darkfi-911.exe since barad-dur is the saruman tower that was mining the forest to create the uruk-hai XD Yeah oh no barad-dur is the sauron-eye tower If you're combining async and threaded stuff, you'd usually leave 2 or so threads available for the async executor But obviously, since we can, better to delegate to separate program orthanc is the saruman one Then the kernel will handle it better than we can yeah the problems arise when you try to combine async with triggering normal threads so better let the kernel handle it indeed since it will also decouple/debloat the main daemon ++ : @parazyd pushed 1 commit to master: 13bd090150: contract/money/integration: Gather block reward owncoins : @parazyd pushed 1 commit to master: 036afda345: validator: Configurable fee verification, incomplete... was offline, back now test test back bye : @lunar-mining pushed 2 commits to net_hostlist: c7cf7d861d: lilith: change no hostlist warning to fatal panic : @lunar-mining pushed 2 commits to net_hostlist: 639f1f72bf: store: fix and simplify tests brawndo: around? upgrayedd: I got my code compiling with make clippy, had to sort out some dependency issues. I did get 4 warnings unrelated to my code changes though, does this matter? Here's one example: https://pastebin.com/NA3Bh9Ak Title: warning: using `clone` on type `Coin` which implements the `Copy` trait --> - Pastebin.com deki: you should rebase master first, so your changes are always on latest master head ah right forgot to do that you know how to do that from upstream right? yes will sort it out now, then recompile : @aggstam pushed 2 commits to master: dc882d256b: validator:pow: decoulbed mine_block() from PowModule so it can be used outside of it : @aggstam pushed 2 commits to master: a27725b58f: script/research/minerd: miner daemon skeleton : @aggstam pushed 1 commit to master: 09137d4633: script/research/minerd: handle new request trigger using smol channels brawndo: check this out ^^ deki: thats not how you rebase upstream... upgrayedd: yeah I stuffed it up, trying to force push to overwrite it but I don't think I can should I just delete the branch and do it again? no you can just reset head to here 0152cd422509bbe8ccd1e5a689575facfa876a1d and then do it properly reset head, aka delete all those commits ok so I'll do: git reset --hard 015cd... then do a git push origin *branch name* --force ? yy and for proper rebase: sync your fork master with upstream master, then on your branch: git rebase master && git push -f ok ty that all worked out fine, thanks for your help also ran make clippy and that compiled with no errors gonna go to sleep here, gn test test back Say I want to start a network of local nodes, how do I go about that? Do I hardcode some bootstrap nodes, or can bootstrap addresses be provided as params? How can I check they actually connected? namaku: define local nodes, you mean a local network of your own? upgrayedd: yes, for example for testing namaku: have you checked here? https://github.com/darkrenaissance/darkfi/tree/master/contrib/localnet ah no, thanks. But there are no docs. Is `tmux_sessions.sh` the script to start the network?
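on the "leave a couple threads for the executor" point above, something like (std API, the split policy is just an example):
```rust
// reserve ~2 cores for the async executor, give the rest to mining threads
let cores = std::thread::available_parallelism()
    .map(|n| n.get())
    .unwrap_or(4);
let miner_threads = cores.saturating_sub(2).max(1);
```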
namaku: what docs do you need in there? its a single script(yeah, the tmux_sessions.sh) and each nodes config(which describes what each line does in comments) ok ok. I would put at least "run `tmux_sessions.sh` to start this network" (and possibly add requirements if there are any). But that's me. well those files are mostly used by devs to simulate networks, therefore its assumed to know how to run a script and/or have requirements already setup, along with the corresponding binaries but always, feel free to contrib more information if you feel it should have more stuff :D this assumes only project members run local networks. I would dare to say that is not very contributor friendly. For example, DAO builders might want to run local networks How does it assume that? if anything, it assumes that you understand what these files are saying/doing How is that "not very contributor friendly"? I am not trying to be fussy but "those files are mostly used by devs to simulate networks" etc suggests that to me yeah but I'm trying to understand how you are arriving at that conclusion? I didn't say ONLY There should be a docs chapter "how to run your own local network". I know because I was responsible for such docs in my prev project Did you miss this message? upgrayedd | but always, feel free to contrib more information if you feel it should have more stuff :D no, but I first had to ask you to get a link to the codebase, which got us to this conversation and what was that supposed to mean? the folder was not like hidden or anything You may know the codebase by heart, but "search the whole codebase and you will find" (how do I know I will find?) isn't really contributor friendly in my opinion. Again, not here to start a dispute. This also IS a contribution. lol nobody is disputing, I just don't understand the logic ok a simple recursive grep of the word "localnet" would point you to those files you don't need to know the codebase by heart even simpler, search "localnet" in the repo host(github, codeberg) would point there again I just don't get, how you can come to the non contributor friendly statement without doing the bare minimum (and the light gaslighting afterwards) and lastly, since you mentioned docs, did you even check them? https://darkrenaissance.github.io/darkfi/testnet/node.html?highlight=local#local-deployment Title: The DarkFi Book (again pointing to same scripts) maybe coz I felt lightly gaslighted first (it's a single script! do the bare minimum!). Stopping here. lol how did me describing the folders content feel like gaslighting? anyway good laugh thanks :D upgrayedd: are my latest changes for the PR okay? I've run make clippy, running make test and it's passing everything so far (taking a while though) has anyone got this project compiling on a macOS? Or is it only meant for linux distros? greets greets anyone else follow Matthew Green? He's a cryptographer and professor, if you use twitter he's worth following, although he doesn't always talk about crypto stuff, he does have a blog: https://blog.cryptographyengineering.com/ Title: A Few Thoughts on Cryptographic Engineering – Some random thoughts about crypto. Notes from a course I teach. Pictures of my dachshunds. gm hey deki: you should squash the commits to a single one ++ skoupidi: sure, making sure I've got the right command: git rebase -i HEAD~n or git merge --squash?
deki: git rebase one, as you want to squash them in your own branch first, not merge them to master upgrayedd: okay thanks to verify you did it correctly, the resulting commit changes must be identical to the current PR ones https://github.com/darkrenaissance/darkfi/pull/248/files Title: fixing hardcoded value for decimal places to constant by deki-zedd · Pull Request #248 · darkrenaissance/darkfi · GitHub okay just squashed all commits into one with a single message : @aggstam pushed 1 commit to master: 7c9b3549cf: darkfid2: use minerd to mine blocks, validator: cleaned up threads info as its not longer required anyone here use Cairo? It's very similar to Rust syntax wise used for Starknet which is a layer 2 for ethereum : @Dastan-glitch pushed 1 commit to master: f46eb4c0e4: src/event_graph: request and reply multiple events upgrayedd: ^ chaotic sync down to 3 seconds from 7 00:24:02 [INFO] [EVENTGRAPH] Fetching events 00:24:05 [INFO] [EVENTGRAPH] DAG synced successfully! : @Dastan-glitch pushed 1 commit to master: dd43ff2bfd: remove unused import hihi gm : @Dastan-glitch pushed 1 commit to master: 6fb5083a4e: spec: change from blake3 to blake2b and add explainer why congrats dasman, lets pick up the pace so we can ship mainnet gm or good evening from down under did you guys change to blake2b because it's more secure? I remember discussing it a few days ago when blake3 came up https://darkrenaissance.github.io/darkfi/spec/crypto-schemes.html#hashing-to-fp Title: The DarkFi Book check your logs and the comment in the code ty dasman: yo gj! have some comments tho: https://github.com/darkrenaissance/darkfi/blob/f46eb4c0e4d8a6cb53e1fc8a5b8b52fed1bc8547/src/event_graph/proto.rs#L416-L442 1. in this loop, you should use a reference to events, don't clone the vec 2. genesis timestamp should be retrieved outside the loop once, so you don't constantly hit the RwLock on each iter 3. same for bcast_ids, grab the write lock outside the loop once, unlock after all iter has finished 4. I don't see the request/response vector size limit 1-3 are memory/locks optimizations, 4 is prob missing impl antikythera: are we changing to blake2b everywhere? if not: "DarkFi uses BLAKE2b" is wrong, as its just the DAO, where everywhere else we use blake3 https://github.com/darkrenaissance/darkfi/blob/master/doc/src/spec/crypto-schemes.md?plain=1#L89 upgrayedd: does my PR check out ok? Asking so I can go onto another task deki: code wise looks fine, although I'm not sure how you tested it compiles/works I ran make clippy and make test, both passed with no errors if that's what you mean?
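points 1-3 from that review in miniature (types/fields invented, the pattern is the point):
```rust
use std::{collections::HashSet, sync::Arc};
use smol::lock::RwLock;

struct Event {
    timestamp: u64,
    id: [u8; 32],
}

async fn process_events(
    genesis_timestamp: u64,                    // read from its lock once, before the loop
    bcast_ids: Arc<RwLock<HashSet<[u8; 32]>>>,
    events: &[Event],                          // borrowed, not cloned
) {
    let mut ids = bcast_ids.write().await;     // one write lock held for the whole loop
    for event in events {
        if event.timestamp < genesis_timestamp {
            continue
        }
        ids.remove(&event.id);
    }
}
```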
drk is not a workspace member, it won't be included in those ah I see we can leave it in limbo until drk is integrated again back to workspace upgrayedd: thanks, will correct that actually (re-reading) the section just claims we use blake2b instead of blake2s if we use blake2s, we could then add we use that as well (it doesn't say anything about blake3) antikythera: yeah the correct approach is to specify what is used where upgrayedd: ++ ty dasman: mine went to 2sec, don't recall what was before tho upgrayedd: yours was 5 noice so >2x speedup yups :D I reckon the limit will lower it, but still thats impressive if you consider we went from unknown time(due to eternal loops, deadlocks, etc) to under 10s for 100k messages sync (don't forget this is optimal net conditions, everything is local, we just test the sync algo) correct, I'm aware of that will make the above changes and test online, and we'll see dasman: lmk so we can test it together, also over tor our initial testing was good, so I reckon it won't change in terms of correctness but you never know especially since a p2p upgrade is coming : @Dastan-glitch pushed 1 commit to master: 9257e01e35: spec: DAO::mint() and DAO::propose() : @lunar-mining pushed 3 commits to net_hostlist: 40619581cd: store: reduce LOC in hostlist queries and update usage.... : @lunar-mining pushed 3 commits to net_hostlist: 765bd819b2: net: change unwrap() to expect() on hostlist queries : @lunar-mining pushed 3 commits to net_hostlist: 3f51d80438: chore: fix test fixes : @Dastan-glitch pushed 1 commit to master: befba39321: spec: reword section on blake2 : @Dastan-glitch pushed 1 commit to master: d3be6c2819: src/event_graph: aquire locks outside loops upgrayedd: what exactly do you mean by vector size limit? limiting the number of requested events? yeah have a max value thats configurable by the operator, but always less than a hardcoded constant max also I think Vec::with_capacity() boosts things up a bit, but I'm just testing the extreme case, what do you think? is the vec size(or max size) already known? then yeah it helps why tho? the number of parents (missing_parents) is decreasing as the tree grows wait a sec lets say I have a fresh node and I want to sync the tree I grab everyones current tips and start going backwards from there that vector goes until where?
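the missing limit check is basically this shape (constants/types invented for illustration):
```rust
type EventId = [u8; 32];

const MAX_EVENTS_PER_REQUEST: usize = 100; // hardcoded protocol ceiling (example)
const MAX_PARENTS_PER_EVENT: usize = 5;    // per-event parent bound from the discussion

// on receiving a request: refuse anything over the ceiling
fn request_ok(requested: &[EventId]) -> bool {
    requested.len() <= MAX_EVENTS_PER_REQUEST
}

// on receiving a reply: a peer can never legitimately send more than
// parents-per-event * number of events we asked for
fn reply_ok(requested_len: usize, reply_len: usize) -> bool {
    reply_len <= requested_len * MAX_PARENTS_PER_EVENT
}
```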
until you get Null ID Genesis yy the problem is, that here: https://github.com/darkrenaissance/darkfi/commit/f46eb4c0e4d8a6cb53e1fc8a5b8b52fed1bc8547#diff-45851d09ef007cfcf178e5079f16af4a026efd24195932064b2afb87983b1c94R72 Title: src/event_graph: request and reply multiple events · darkrenaissance/darkfi@f46eb4c · GitHub I can nuke your node with a veeeeeeery long vector effectively ddosing you so by setting a max limit, we protect these cases + another way to find malicious nodes so we have a config value of how many we request from others, sending it along our request and the other node checks that that value is less than the hardcoded max everyone follows similarly: https://github.com/darkrenaissance/darkfi/commit/f46eb4c0e4d8a6cb53e1fc8a5b8b52fed1bc8547#diff-45851d09ef007cfcf178e5079f16af4a026efd24195932064b2afb87983b1c94R77 Title: src/event_graph: request and reply multiple events · darkrenaissance/darkfi@f46eb4c · GitHub when we receive the response, an attacker can flood us with events so we can quickly check that the response vector is <= the max we have(which we requested) so if I ask for max 10, the response must contain max 10 okay makes sense against ddos, but just to make it clear, we don't request all the events, we request missing parents as vec and reply them back oh so that vec is for a single events parents? correct then the max is already set, iirc 5 correct that's why I was confused which we should again check for consistency so in the request we ask for some events parents, or their id directly? yeah you request missing parents Ids for a single event tho right? yes so we went from asking each parent individually, to asking them all in one request right yes you know you can speed this up even more right? how? instead of asking a single events parents, ask for multiple but I guess since we ask different nodes for different events that wouldn't make sense but again you're syncing you go backwards you don't have events more than you get yy I was thinking something like: lets say tips are 10 ask for all 10 at once for all their parents so lets say each one got 2 parents, you will get max 20 parents back but right now we loop and ask different node for each event right? you could make something like: N / number of nodes, and ask each node for all those yes, once you get it you break the loop and go to the next one so if you have a single peer, you ask the parents for all 10 events you got if you have 2 peers, ask the first one for the first 5 parents, and the second one for rest 5 and go on and on backwards aha that will be cool, I'll work on it you see the speedup now? yy good I don't think it introduces complexity, since right now for example if you got 2 peers, you will ask: first peer: [0, 2, 4, 6, 8], second peer: [1, 3, 5, 7, 9] but that would happen as individual requests inside the loop divide and conquer :D while you can ask each peer directly for the tips since you already know how many requests you will make to them so instead of requesting first peer for 0, after a bit for 2 ...
you ask directly for all of them hence why we also need limit, since the response must always be <= 5 * request_vec.len() it only needs a good handling in case a peer doesn't respond but I guess that means the tips will stay in the map and will ask on next iter Okay now I'll just divide my requests between peers and see what happens, I'll keep you updated noice, just introduce some randomness in the requests, just for extra entropy like don't do: first: [0,1,2,3,4] second [5,6,7,8,9] #mansloveentropy yy gm greets : @Dastan-glitch pushed 1 commit to master: bba2c5472a: make DAO nullifiers the same as money, otherwise we can't detect whether the coins we're using were already spent or not. Having access to a set non-membership merkle tree here would fix this. gm, I've noticed a lot of places in the code where it says 'TODO' or 'FIXME', what's the protocol for doing these? Do you create an issue in the github repo, or just create your own branch and notify people here? deki: just ask here before you start in case someone else is also working on it okay thanks or if anyone has tasks for me, I'm open. Keep in mind I've only recently started with Rust (but know python/C some C++) everything is here https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html Title: The DarkFi Book : @Dastan-glitch pushed 1 commit to master: fc68e1b113: spec: DAO::propose() nullifier : @Dastan-glitch pushed 1 commit to master: 9ec277abee: spec: add money contract with money transfer : @lunar-mining pushed 6 commits to net_hostlist: b38a1267fb: store: remove redundant else clauses : @lunar-mining pushed 6 commits to net_hostlist: e40405a257: store: bug fix... : @lunar-mining pushed 6 commits to net_hostlist: 3abd2c62bb: net: don't hide connection upgrade inside perform_handshake_protocols()... : @lunar-mining pushed 6 commits to net_hostlist: bd0c7684c8: outbound_session: replace downgrade_host() with `rejected` vector... I have a quick question for any devs. The chat logs that are on agorism.dev/logs, anyone know how they are being pushed to the server? I've read weechat docs, so i know where my own logs are, just wondering what the best way would be to mirror/host them? yevesedwards: its a simple irc bot listening in configured channels and appending each message to a log file, then you expose that log file on the server yvesedwards: check here for inspiration https://github.com/darkrenaissance/darkfi/tree/master/bin/darkirc/script awesome, thanks for the info hey yvesedwards, welcome hey run !list No topics !topic net upgrade status Added topic: net upgrade status (by rún) I'm trying to create a "bridge" that doesn't use telegram, not even sure it's useful, just fucking around, just need a better way to host it http://nkl5i6mlugtmc22mfhmhmhikekjvfnnsa7u5546cfzei3hxqq7nhjvid.onion/darkfi/ yvesedwards: lol so you wget the log file each time from agorism.dev? well for the test I have, that's why i was asking about hosting logs it's not dynamic, only gets text file when the page is loaded you can change the text file it points at and it's basically a template for any chat that has a hosted .txt log gm gm gm : @Dastan-glitch pushed 1 commit to master: 3e04487447: spec: DerivePubKey() brawndo: can u share your pubkey please gm lou: 6NpTuikk64ejox5h7TyRJG7F3x1ea3tYWs7KwMvFw1HE Share yours as well pls !list Topics: 1.
net upgrade status (by rún) ty brawndo 5J7EBWjYGkp9FnjSxQBz4Ydw7LWDBsLLAF5wDaUJ9ivn test test back lou: I think I set it up so you can DM me whenever ++ hm something weird's happening Do you see my msgs? I can try messaging someone if you're still having issues What's your dm pubkey? Mine is 6NpTuikk64ejox5h7TyRJG7F3x1ea3tYWs7KwMvFw1HE 5oyX9YVuLbi1SGyiAt9yGzR7nYqaRHG241qcjE9z5rzm ok restarting brb' test test back ircd::irc::client: [P2P] Decrypted received message: Privmsg { id: 2355642742537020626, nickname: "deki", target: "deki", message: "test", timestamp: 1705316382, term: 0, read_confirms: 0 } I don't know why I'm seeing this in my log sorry just adding you to my .toml file now sure sure just sent you a dm hm yeah not seeing it [contact."deki"] contact_pubkey = "5oyX9YVuLbi1SGyiAt9yGzR7nYqaRHG241qcjE9z5rzm" I added this and restarted yes that's right, same for you I have [contact."brawndo"] and your pubkey Weird I'll restart ok back, had to restart pc wouldn't show the channels ok it works now Thanks sweet brawndo: u seeing my dm's? lou: Yes gophers://bitreich.org/I/memecache/c-vs-rust.png true true !list Topics: 1. net upgrade status (by rún) : @parazyd pushed 1 commit to master: 64c80377a2: contract: Move POW_REWARD constant to money contract upgrayedd: How come in the code there's a REWARD constant of 100_000_000 and in the test harness where I created a block, I got a coin of value 2000000000 ? brawndo: that constant shouldn't be used https://github.com/darkrenaissance/darkfi/blob/master/src/sdk/src/blockchain.rs#L134-L155 use this ohh Damn ok iirc pos still uses the constant as we haven't implemented post-pow stuff so I left it there as is Alright, I'll revert the commit I just made just nuke the head XD nah I'll add a comment to the revert commit ++ hey guys I won't make it to the meeting, it's past midnight here so going to sleep. Haven't taken on another task, just been going through the code to see what I can do No worries : @parazyd pushed 3 commits to master: e7f2b556db: contract/test-harness: Update VKS and PKS checksums : @parazyd pushed 3 commits to master: b13464f27a: Revert "contract: Move POW_REWARD constant to money contract"... brawndo: I don't think you needed to update vks/pks They changed In bba2c5472a6b29bc64bd8097d81508f25b0a6f4b Or rather they changed for me did you remove the bins before checking? Lemme check btw irrelevant: wanted to ask you about args parsing https://github.com/darkrenaissance/darkfi/blob/master/src/lib.rs#L75-L76 these I think not needed, since clap already prints them now to the juicy stuff when using flatten and same type of arg, like here https://github.com/darkrenaissance/darkfi/blob/master/bin/darkfid2/src/main.rs#L95-L105 Yeah flatten fucks it up when you try --help you will get error, as clap thinks its a duplicate arg I don't know how to fix it if you skip them, you can't use rest args for some fucking reason like for example to use custom config --config path doesn't work as it doesn't see the config argument so in order for it to work, your --help doesn't noice haven't digged further than this Perhaps we just need to not use clap :p yeah but the error with --help comes from clap as it doesn't allow duplicates, which makes sense so its a lose-lose situation XD : @parazyd pushed 1 commit to master: eecee6c829: contract/test-harness: Include Money::FeeV1 zk circuit for cached vks Why does it happen with --help ? --help will print all arguments along with their description What's the duplicate I mean? 
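re the flatten clash above, the shape of the problem (simplified, clap 4 derive syntax; fields illustrative):
```rust
use clap::{Args, Parser};

#[derive(Args)]
struct BlockchainNetwork {
    #[arg(long)]
    p2p_seed: Option<String>, // example flag shared by every network section
}

#[derive(Parser)]
struct Cli {
    #[arg(long)]
    network: String,
    // flattening the same struct three times re-declares every long flag,
    // so clap treats them as duplicates and --help blows up
    #[command(flatten)]
    mainnet: BlockchainNetwork,
    #[command(flatten)]
    testnet: BlockchainNetwork,
    #[command(flatten)]
    localnet: BlockchainNetwork,
}
```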
oh like in the example ah ic we use flatten BlockchainNetwork 3 times to define each network config That should probably be rewritten to use only one And then branched with exclusive flags: --mainnet, --testnet, --localnet aha like a command ./darkfid {mainnet, testnet, localnet} and then --args Or rather --network=mainnet we already have that it works like that right now you define the network using --network and then we grab the corresponding config via the flattened structs https://github.com/darkrenaissance/darkfi/blob/master/bin/darkfid2/darkfid_config.toml#L12-L13 https://github.com/darkrenaissance/darkfi/blob/master/bin/darkfid2/src/main.rs#L91-L93 ok yeah but rather than having those structs in clap, you can deserialize the toml yourself and find them We do that in darkirc for example oh like we do in ircd iirc Precisely to extract configured contacts Yeah Eventually I'll write a toml lib without deps so we can just do this shit on our own noice yeah that should work Probably just make bindings for this https://github.com/cktan/tomlc99 Title: GitHub - cktan/tomlc99: TOML C library Right now literally every toml lib has a soyerde dep yeah that should go btw I will migrate drk to use tinyjson not serde_json so we can make it working again ah yeah cool I'm not gonna do much in terms of design, I will just make it instead of calling darkfid like we used to, just use a db connector to execute the sql query directly(for wallet stuff) so we pretty much use the exact same queries/tables, just call darkfid only for blocks/blockchain stuff otherwise the whole wallet management should happen by drk itself ++ : @parazyd pushed 1 commit to master: 511b072c25: validator/verification: Allow fee call at any place in the transaction sup yo hi hi hai can someone run this? im outside !start Meeting started Topics: 1. net upgrade status (by rún) Current topic: net upgrade status (by rún) https://github.com/darkrenaissance/darkfi/pull/249 Title: Net hostlist upgrade by lunar-mining · Pull Request #249 · darkrenaissance/darkfi · GitHub rly short update that i've fixed the small changes brought up by brawndo, fixed a final bug and now this is truly gtg The test unit fails still https://github.com/darkrenaissance/darkfi/actions/runs/7519968805/job/20469120013?pr=249 Title: Net hostlist upgrade · darkrenaissance/darkfi@bd0c768 · GitHub So once that is fixed, we can merge congrats Also run a make clippy ok will fix the unit test, probs something small ++ congrats folks !next Elapsed time: 2.5 min No further topics gg gg gg cargo +nightly test --release --features=net --lib include-ignored this passes for me thread 'net::tests::p2p_test' panicked at src/net/tests.rs:74:6: called `Result::unwrap()` on an `Err` value: SetLoggerError(()) I think somewhere you have the logger reinitialized Use the init_logger() function oh forgot to commit changes to integration test btw the failing test seem to be the event_graphs one so since they use the p2p/net module, you should test those : @lunar-mining pushed 2 commits to net_hostlist: 5f83327aec: chore: delete unused methods : @lunar-mining pushed 2 commits to net_hostlist: 4f4e4fb5b3: net: small integration test tweaks as I assume once they finish testing, they try to shutdown the p2p module, triggering the hostlist save error ah i think brawndo was talking about a different test cos the path is net/test.rs where are you seeing that upgrayedd? 
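the init_logger pattern being referenced is essentially this (simplelog API, mirroring the test-harness comment):
```rust
// all tests in one binary share a process, so a second init returns
// SetLoggerError -- ignore it instead of unwrapping
if simplelog::TermLogger::init(
    simplelog::LevelFilter::Info,
    simplelog::Config::default(),
    simplelog::TerminalMode::Mixed,
    simplelog::ColorChoice::Auto,
)
.is_err()
{
    eprintln!("logger already initialized, skipping");
}
```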
assuming due to the previous logs being for event_graph tests https://github.com/darkrenaissance/darkfi/actions/runs/7519968805/job/20469120013?pr=249 Title: Net hostlist upgrade · darkrenaissance/darkfi@bd0c768 · GitHub in this error log oh no my bad lol thread 'net::tests::p2p_test' yeah its the logger error brawndo said disregard my ignorance brawndo: i had to change dao vote nullifier to be the same as money which leaks anonymity ok i just committed the latest integration test file so it probs works now, waiting for github to finish running the test cos we test the coin is in the tree but it might be a spent coin already we could use set non membership but i dont see how with incrementalmerkletree and i dont want to delay theres a workaround though, just do a money::transfer() call alongside DAO::vote() to upgrade the nullifier fyi I think we're fine with that "leak" for now rún: it will fail again, you should use init_logger or do something like this: https://github.com/darkrenaissance/darkfi/blob/4f4e4fb5b3ada51ce3238ff3cd56519dbb38df10/src/contract/test-harness/src/lib.rs#L77-L91 (read comment why) checking ok got it tnx finish here? or anyone has anything else to share? all good from my side all good, just wanted to share that info https://darkrenaissance.github.io/darkfi/arch/arch.html Title: The DarkFi Book which phase we on? 3? or 2? I would say dcon1 we are still integrating/developing stuff we're going backwards :) 1 step backwards, 2 forward :D !stop !end Elapsed time: 19.4 min Meeting ended Thanks all o/ market is picking up bigly ACTION getting sweaty yeah shippin szn ty all see u cya : @lunar-mining pushed 1 commit to net_hostlist: ec5abf9683: net: make clippy + fix test https://github.com/darkrenaissance/darkfi/actions/runs/7531379104/job/20499820377?pr=249 Title: net: make clippy + fix test · darkrenaissance/darkfi@ec5abf9 · GitHub now i am seeing this: thread 'event_graph::tests::eventgraph_propagation' panicked at src/event_graph/tests.rs:164:9: i guess this is what upgrayedd was talking about earlier? and this: Error: Clippy had exited with the 101 exit code https://github.com/darkrenaissance/darkfi/actions/runs/7531379705/job/20499957040?pr=249 Title: Net hostlist upgrade · darkrenaissance/darkfi@ec5abf9 · GitHub but 'make clippy' and cargo +nightly test... work fine on my machine lain: clippy is probably the runner bugging out run event_graph tests to check the failing one yeah it's the event graph test thread 'event_graph::tests::eventgraph_propagation' panicked at src/event_graph/tests.rs:164:9: Node 0, expected 2 events, have 1 so events failed to propagate the clippy thing seems non-legit cos the command it's using runs fine on my device cargo clippy --message-format=json --all-features --release -- -D warnings this passes for me: cargo +nightly test --release --features=event-graph --lib eventgraph_propagation -- --include-ignored oh hmm it works when i switch off the p2p test lain: check if you use same ports perhaps those tests use the same ports some bugging is happening when they run in parallel i wuz using same ports, but even when i use diff ports, it seems the tests still conflict somehow which are the two tests? I can check wait wait trying some stuff https://github.com/darkrenaissance/darkfi/blob/net_hostlist/src/net/tests.rs#L125 just got it working both tests use 13200 as starting port yeah ik what was it?
: @lunar-mining pushed 1 commit to net_hostlist: c0e23dca86: net: fix ports on test just the port i think, rerunning tests now : @aggstam pushed 1 commit to master: f78f1e018d: darkfid2: parse network config directly from the config file, not as flattened arg brawndo: ^^ a bit hacky, but it is what it is hey upgrayedd, can you drop you pubkey, wanna dm ya sadar: FfrD6FVbQZmbA5TQVxeRra4cAryH8ijC9G3r2BieFzSs give me yours C9vC6HNDfGQofWCapZfQK5MkV1JR8Cct839RDUCqbDGK sadar: ready when you are do you see my message? sadar: nope sadar: did you save the config file and restarted ircd? upgrayedd i dropped the wrong pubkey heres mine: 37m4LDGUEaRN4pPf5Mpxp7dphEbRy3Z5EzVbn7u8txWP sadar: ready when you are greets greets gm https://github.com/darkrenaissance/darkfi/actions/runs/7531918209/job/20501526117?pr=249 hey so this check is failing but it's something to do with consensus/ validator stuff, not the network code Title: net: fix ports on test · darkrenaissance/darkfi@c0e23dc · GitHub error[E0433]: failed to resolve: could not find `validator` in `darkfi` same error here: https://github.com/darkrenaissance/darkfi/actions/runs/7531918646/job/20501526993?pr=249 Title: Net hostlist upgrade · darkrenaissance/darkfi@c0e23dc · GitHub finally there's also this clippy check that's failing https://github.com/darkrenaissance/darkfi/actions/runs/7531918645/job/20501526770?pr=249 Title: Net hostlist upgrade · darkrenaissance/darkfi@c0e23dc · GitHub but 'make clippy' works fine on my side output is kind of a mess but if you grep "error" in the above output it leads to validator errors also e.g: ../contract/deployooor/darkfi_deployooor_contract.wasm: No such file or directory (os error 2)\n --> src/validator/utils.rs:92:13\n brawndo: pub const VALUE_COMMITMENT_PERSONALIZATION: &str = "z.cash:Orchard-cv"; thats in sdk constants/fixed_bases.rs should i put that in the spec? I assume halo2 doesn't allow us to custom this : @Dastan-glitch pushed 1 commit to master: f9a8b41657: sdk/crypto: use the same generator for pedersen_commit_base() and pedersen_commit_u64() gm 09:49 brawndo: pub const VALUE_COMMITMENT_PERSONALIZATION: &str = "z.cash:Orchard-cv"; 09:50 should i put that in the spec? I assume halo2 doesn't allow us to custom this Can be changed to whatever, but should be in the spec, yeah I don't like this commit: f9a8b41657 You don't understand how it works and you just made a random change So please revert it pedersen_commitment_base will take a base field element and specifically uses the NullifierK generator since that has special properties inside the ECC halo2 gadget The same way pedersen_commitment_u64 will take a u64 element and use that specific generator in order to perform a 64-bit width range check They're specifically different functions because the _u64 function implies the assumption that you shouldn't commit to something larger than 64 bits ok reverted i ran make test on the repo and it passed : @Dastan-glitch pushed 1 commit to master: 5fb0913cb0: Revert "sdk/crypto: use the same generator for pedersen_commit_base() and pedersen_commit_u64()"... what's special about NullifierK inside halo2 gadget? abstractly they are both group generators and have the same mathematical properties. btw is it ok for me to change use the terminology block instead of slot? 
It's not about the mathematical properties It's the way they're written in code ok _u64 will enforce a 64-bit range check and uses a different generator to be able to trigger that There's a reason why those constants are implemented https://github.com/darkrenaissance/darkfi/blob/master/proof/opcodes.zk#L5-L7 got you, thanks for clarifying np upgrayedd: what terminology do we use instead of slot? should i say "block height" or "block index"? haumea: block height wdym change the terminology block instead of slot? yo can i merge the branch re: above msgs. the failing tests are related to consensus/ validator, not network code lain: lmc lain: thats not a test error test all passed thats a build features error(make check) which exists for some time in master yeah i meant make check it's failing on a validator thing so can i merge some feature import is missing haven't bothered to check XD i just see 'validator' and go 'not my problem' lol lain: did a quick skim of the changes https://github.com/darkrenaissance/darkfi/pull/249/files#diff-589965138a6b5fa23381966bbd486bcc4e4e6baa774806492d85b60b3386902cR87 Title: Net hostlist upgrade by lunar-mining · Pull Request #249 · darkrenaissance/darkfi · GitHub this file should be using the app name since right now, iiuc, every app that hasn't configured a hostlist file path will use that one yeah correct aside from lilith which is forced to configure a hostlist per spawn we haven't ported the other apps over yet quick q: upgrayedd: currently in DAO/spec i use the term slot everywhere, but you told me it's obsolete will change it now s/slot/block_height/ haumea: I never said its obselete lol ah is it ok then for me to use slot then? I said in PoW its block height or you prefer i use slot? in PoS its slot id I always prefered block height and you know it :D ok gotchagotcha lain: so the tldr logic is: grab random peer from greylist, if they don't respond remove it from greylist, if they respond add to whitelist and remove from greylist, correct? it should use path::config_dir() tho instead of .config/darkfi as default re: above that's the refinery yes how do nodes get added to graylist? protocol seed and protocol addr nodes send their whitelist (AddrMsg), and when we receive the Addrs, it's added to greylist yeah thats on startup how will this scenario go: not just start I create a node, advertise to lilith, running all good in outbound session, if we don't have connections, we reseed plus protocoladdr is running continually iirc so its lilith responsibility to keep track of live nodes? every node keeps track of nodes but that's lilith's special focus yes aha so in the scenario, I create a second node, connect to lilith, connect to my first node what happens if after a while my first(or second) node drops? will my other node append it to greylist directly, or ask lilith and hope it has checked that node for liveness? lilith's refinery should downgrade the connection to greylist then it doesn't get sent during seed process lilith is not connected to that node doesn't matter lilith has it in the hostlist, refinery pings nodes in the hostlist yeah after a refinery_interval right? yes we can adjust this in settings rn i think it's 10s so the question is, does lilith/node pings all whitelisted members after that interval? ah no it just does that to the greylist exactly! 
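fwiw the generator distinction from earlier in one place (stubs only, conceptual; not the exact sdk signatures):
```rust
use pasta_curves::pallas;

/// Commit to an arbitrary base field element. In-circuit this pairs with a
/// full-width fixed-base scalar mul over the NullifierK generator.
fn pedersen_commitment_base(_value: pallas::Base, _blind: pallas::Scalar) -> pallas::Point {
    unimplemented!()
}

/// Commit to a u64. In-circuit this pairs with a short (64-bit) fixed-base
/// scalar mul over a different generator, so a valid proof also shows the
/// committed value fits in 64 bits. Sharing one generator between the two
/// would silently drop that range check, hence the revert above.
fn pedersen_commitment_u64(_value: u64, _blind: pallas::Scalar) -> pallas::Point {
    unimplemented!()
}
```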
:D sorry for going the long route to get to it just wanted to get you through my thought process yeah i get you so previously, we had a method called downgrade connection so if I understand it correctly, a normal node whitelist is just their connected peers right? which would downgrade a connection from white or anchorlist when we couldn't establish a connection. however i removed this cos it's not in line with the monero impl only lilith has a total "network view" of all whitelisted nodes right? no whitelist is just created from greylist we ping peers and promote them what you are talking about is "anchorlist" anchorlist is when we've actually managed to connect to a node aha, so we got a subset of liliths anchorlist right? no lilith just shares its whitelist all nodes just share the whitelist so everyone got everyone? lilith is not doing anything special aside from running inbound nodes on multiple networks everyone has a hostlist with grey, white and anchor connections ok just trying to understand current topology nodes share the whitelist, and save the greylist when they receive whitelists since if slot counts are big, the network pretty much becomes a mesh where everyone is connected to everyone correct? anchorlist is never shared, it's just remembered by nodes and they try to reconnect to anchor nodes on startup upgrayedd: that depends we have a new flag in settings called 'advertise' if advertise is set to false, you don't broadcast your address so you could be an island just making outbound connections aha ok so anchorlist is pretty much a predefined list of known nodes, and we append to that our whitelisted ones so we remember them on next run right? it's separate to whitelist it's its own list yeah read what I'm saying we append to the anchor list our whitelisted nodes so basically on outbound connect, we first try to connect to anchorlist, then whitelist, then greylist no there's no appending oh its just in memory? it's stored on disk and loaded to memory as 3 separate vectors gotcha gotcha gotcha, so anchorlist is pretty much hardcoded as it never changes correct? we remove addresses from the anchorlist if the channel stops session::remove_sub_on_stop() ok so lets go over a scenario: my anchorlist has 5 nodes, empty whitelist/greylist I start my node, with 5 outbound slots, which get filled by my anchorlist will I ask seed? I guess yeah and fill the greylist with the response and then will ping greylist and upgrade to whitelist does that sound correct? if you've only 5 outbound slots and they all get filled then the outbound session won't trigger the seed sync but if 1 doesn't work, then that slot will do Peer Discovery and do the things you mentioned above wait tho, so I will only advertise to my anchorlist, not to lilith/seed? if you have advertise set to 'true' you will send your addr to the seed node (protocolseed) and other nodes (protocoladdr) ok good continuing the scenario now my anchorlist and whitelist should both have 5 nodes, and empty greylist when I start again, I set slot to 3, so I grab some 3 random nodes from anchorlist, upgrade them to whitelist(already exist there) and I'm good no there's no interaction between anchorlist and whitelist the question now is: how I'm checking the 2 other nodes that exist in my whitelist are live? so my whitelist will still be empty?
whitelist will only be empty if the refinery hasn't started yet !topic funcid in coin Added topic: funcid in coin (by haumea) but if we only share whitelists, how am I going to share my anchor nodes to another peer? you don't share anchors anchors are only for you aaa like manual peers yeah kinda but used in Outbound connections rather than manual ok, the question is still valid tho, how will we check whitelisted remaining nodes yeah rn we don't do this, there is no "whitelist refinery" we assume if they are on the whitelist it means they are safe, and there's no way to remove them if not i believe this is in line with the monero impl yeah but it's not safe since we pick random nodes from the whitelist there is no determinism that a node won't be removed from there hence why in lilith we used the ringbuffer to ensure we check everyone at some point just to be clear we don't pick from the whitelist directly yeah but we store it we do: anchorlist, if not anchorlist, whitelist, if not whitelist, greylist we send it not store we store in greylist store in disks yes so on my next run, whitelist already has nodes but when we receive a whitelist, it's stored to grey, and must be pinged that we never check for liveness that's true I'm talking about our own whitelist, not the one we get we never get a whitelist just for sake of clarity but yes, if we load it from the disk and it has gone offline i see the problem so there's 2 solutions: we save to the greylist (i.e. load the whitelist from disk, append to greylist) making it part of the refinery or we add some kind of downgrade method for if we can't connect rn if we can't connect we just choose another peer i can check again the monero impl the problem with 1 is that, let's say the node is active when we check it, so we push it to our whitelist, our slots are filled, so we will never check it again but i believe they do not do any "whitelist refinery" for 2 it's the same, if our slots are filled, we will never check the rest of the whitelisted nodes no it doesn't relate to slots if slots are filled, no peer discovery is happening if our slots are filled, we don't do a seed sync (this is not my impl btw, preexisting logic) yeah whitelist doesn't relate to peer discovery greylist is the result of peer discovery yeah but we use that greylist to fill the open slot hence the connection peer discovery -> recv greylist -> greylist refinery -> add to whitelist sorry i don't follow aha so when we try to fill slots, we only check whitelist right?
no we do this: anchorlist, if not anchorlist, whitelist, if not whitelist, greylist the % anchorlist and % whitelist are set in settings so you can say 100% anchorlist plz or 100% whitelist but if there's not enough slots, the algo will still look for slots on the greylist same in monero impl ok sure, but the problem still remains we never check node liveness once they enter whitelist yeah doesn't matter how they got there i offered 2 solutions above but also monero doesn't do this yy I said refinery is the best option since you want that happening in runtime not just oneshot and hope for the best ok so everything non-anchor becomes greylist on shutdown (and therefore start) yeah what this means is that the nodes will reply with an empty Addr message in the time interval before running the refinery but indeed it's safer still imo greylist ping check shouldn't be random, it should be a ringbuffer like we had in lilith can u explain this monero does random since for example this scenario can happen: I'm looking to fill a slot, grab random greylisted peer, they are off, I remove them next peer I grab they are good, I connect to them, ask their list to append to greylist my next iter might grab that peer again ah so ring buffer is more efficient cos no chance of selecting same peer? since it was also in my peer's greylist but they didn't remove it yeah the ring buffer always appends new to end so over time you ensure to have checked everyone at least once ok sure check how it was done in lilith ++ the periodic_purge thing yeah i remember so in the greylist case, you just always pop front : @Dastan-glitch pushed 1 commit to master: f357f1778b: spec: add money coin, current day, pedersen commits fyi that "make check" error also exists on master lain: yy I wrote that earlier :D ah missed it kk i will make these 2 changes : @Dastan-glitch pushed 1 commit to master: 50522bd7db: dao/spec: rename all mentions of slot to blockheight : @Dastan-glitch pushed 1 commit to master: 8f405308b1: test-harness: s/slot_to_day/blockheight_to_day/ brawndo: here? the question is do we prefer explicit features set like ["zk", "rpc", "blockchain", "validator"] or since validator contains the rest reduce to just ["validator"] i'd just make a small point here: sometimes when building something i specify X, but then it requires W, Y, Z which i have to add manually... that's kinda annoying It seems better to be explicit Since you have the possibility of a feature changing and not implying the other feat yeah I agree explicit is better, just asked to be sure haumea: annoying in what sense? nvm overridden lol lol just asked out of curiosity is it something like : import lib::* vs import lib::{foo, bar} What's happening with the p2p branch now? Is it done? tldr of discussion above: some minor tweaks re whitelisted peers liveness check are required current tests are passing, I'm fixing the feature issue now upgrayyed: can u check dm please whenever possible. lou: what's your pubkey? haumea: I changed the tx fee call to be able to be anywhere inside of a tx haumea: Since the fee call is 1 input and 1 output, and if we do not allow spend hooks, what do you think about making the fee call free? brawndo: that's illegal XD So it doesn't count towards gas cost We'll have to constrain it a bit so it is not abused, but it could be good upgrayyed: one sec Otherwise we can also make it cost money Because it can be (ab)used to make a transfer what's the benefit of no gas cost?
it would make calculating the gas fee slightly harder but nbd brawndo: since nothing is stopping me from having 2 fee calls, aka first for tx fee, second to transfer, it should cost money i mean the cost is still there, but now it's hidden / not explicit upgrayyed: 5J7EBWjYGkp9FnjSxQBz4Ydw7LWDBsLLAF5wDaUJ9ivn lou: I got you, shoot whenever ready haumea: It would actually make the fee calculation easier, not harder If it were free But yeah there is space for abuse i mean looking at the cost of tx, as upgrayedd said upgrayyed: just dm'ed you. tell me if you didn't get anything if it's free then it's subsidized by the network/miners do we want people paying fees? then by making it free, we incentivize that Fees will be mandatory it's not a crypto question, more a token eng one I'm saying every tx will have to have a tx fee call yeah but i mean more calls etc. lou: nothing, do you have my correct pubkey? upgrayyed: probably not. Can u send please lou: FfrD6FVbQZmbA5TQVxeRra4cAryH8ijC9G3r2BieFzSs ty lain: there is a call to make fees, but it has a cost. should it be a free call or calculated as part of the fee? is there a benefit to making it free Probably not Actually if it's limited in what it can do, I think it has a fixed gas cost in fact couldn't you spam free gas queries if so It can be constrained to 1 fee call per tx or stuff like that But nvm, just an idea if there's no cost, we just calculate the cost of tx inputs, otherwise we calculate cost of tx inputs + fee function call? If the fee call is not fixed gas, then: 1. Calculate the gas used without the fee call 2. Create the fee call to account for that used gas 3. Calculate the gas used _with_ the fee call 4. Replace the fee call with result of 3. 5. Hope that it's enough fee If the fee call is free OR fixed gas, then: 1. Calculate the gas without the fee call 2. Append the fixed gas needed for the fee call 3. Create the fee call to account for 1. and 2. So provided that the fee call is constrained in the sense that it just takes 1 input and makes 1 output, and does not allow spend hooks, things become quite simpler ++ i think a small fixed fee is probably fine Not fixed fee in terms of price, but fixed in the used gas ++ brawndo: since the fee call is the same across all txs, isn't the used gas constant also? Fixed fee can cause spam attacks like the problem that zcash had upgrayedd: Yeah that's what I'm proposing I thought it was a given, based on code XD ;) ok I'll start cleaning up the host wasm functions and get the gas pricing happening there do we still have spam risk if we have constraints on the fee call? ACTION chilling waiting for make check to finish oh yeah sorry when i said simpler, i meant validator side wallet side is slightly more complicated but in general shifting complexity to wallet is better : @parazyd pushed 1 commit to master: 0e6f51e895: chore: Update copyright year in license headers lain: The spam risk is only if we do not have a dynamic fee. e.g. charging 0.0001 DRK for ANY transaction But we won't have that. The pricing will depend on what the tx does brawndo: shouldn't we limit to 1 fee call per tx tho?
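[editor's note] A rough sketch of the two wallet-side estimation flows listed above, for reference. Transaction, Call, make_fee_call and the fixed gas constant are all hypothetical stand-ins, not the real darkfi types:

struct Call { gas: u64 }
struct Transaction { calls: Vec<Call> }

// Assumed constant gas for the fee call itself, per the discussion.
const FEE_CALL_GAS: u64 = 1_000;

fn calculate_gas(tx: &Transaction) -> u64 {
    tx.calls.iter().map(|c| c.gas).sum()
}

// In reality this would build a 1-input/1-output call paying for
// `gas_to_cover`; here it only models the call's own gas use.
fn make_fee_call(_gas_to_cover: u64) -> Call {
    Call { gas: FEE_CALL_GAS }
}

// Flow when the fee call's gas is dynamic: estimate, append,
// re-estimate, replace, and hope it's enough.
fn fee_dynamic(mut tx: Transaction) -> Transaction {
    let gas_without = calculate_gas(&tx);                     // 1.
    tx.calls.push(make_fee_call(gas_without));                // 2.
    let gas_with = calculate_gas(&tx);                        // 3.
    *tx.calls.last_mut().unwrap() = make_fee_call(gas_with);  // 4. (and 5: hope)
    tx
}

// Flow when the fee call is free or fixed gas: a single pass suffices.
fn fee_fixed(mut tx: Transaction) -> Transaction {
    let gas = calculate_gas(&tx) + FEE_CALL_GAS;  // 1. + 2.
    tx.calls.push(make_fee_call(gas));            // 3.
    tx
}

fn main() {
    let tx = Transaction { calls: vec![Call { gas: 5_000 }, Call { gas: 7_000 }] };
    let tx = fee_fixed(tx);
    println!("total gas incl. fee call: {}", calculate_gas(&tx));
    let tx2 = Transaction { calls: vec![Call { gas: 5_000 }] };
    let tx2 = fee_dynamic(tx2);
    println!("total gas incl. fee call: {}", calculate_gas(&tx2));
}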
so you can't transfer using that instead of money::transfer Yeah we probably should that's on validator side tho, in verify tx mhm I mean outside of wasm :D ah got it, yes fees should be dynamic, i meant the fee call itself could be fixed gas price for simplicity (as an alternative to it being free) instead of limiting to 1 fee call in a tx on validator side, can we instead add that rule to propagation rules in p2p layer? esp since it's related to fee rules / gas pricing anyway haumea: if the tx is not valid you don't propagate it i'm saying to just check the rule in propagation rules, not for validation tx being valid is the propagation rule there are node validation rules R, but there are also network propagation rules S S is a subset of R sorry i mean R is a subset of S but they are not equal S has more rules than R i'm saying the fee rules, including 1 fee per tx, should go in S, but not in R does that make sense? no XD why? where does S exist? I mean who/what defines it the nodes define it as relaying rules, but not for validation it's for unconfirmed txs relayed in the p2p, but distinct from block validation rules They're the same rules ^^ you're checking the fee in the block validation? that doesn't seem like a good idea Of course Why? why have unchecked/unvalidated txs floating around? you misunderstood ok this is kinda important to understand 1. the miners decide the fees themselves (it's a fee market). so when we accept a new block, we don't check the fees of the txs in the block. No the miners do not decide the fees themselves so it's up to the miner to reject the tx or not depending on the fee set, not by the nodes validating the blocks (because then you would reject blocks which include txs that have insufficient fees) This is not Bitcoin the fee in the tx must at least cover the gas cost, so there is a hard lower limit that's what we check wait are we doing fee market or dynamic fee pricing like ETH? The fee is based on consensus And you verify the fees in order to validate the coinbase transaction, where you see what fee is burned and what fee is rewarded to the miner if that fee is sufficient, the miner will choose to include the tx or not, but it must be valid in terms of sufficient fee to cover gas cost There is the minimum fee of the gas cost Anything extra is miner's tips ++ ^^ sure let's exclude the minimum, that's another thing we can't exclude the minimum that's what makes the tx valid or not i mean i'm not talking about the minimum right now you could have big fee txs or small fee txs we don't care about extras in terms of validity as long as the fee covers the base gas cost, it's valid Yeah the extra fee is just MEV the point i'm making is that nodes do care about the fee yeah that's MEV as brawndo says whether they relay the tx or not it has nothing to do with validity + other conditions They should Otherwise they would relay spam txs all around the network That's such an easy attack i'm talking here about 'other conditions' for relaying txs in the p2p network Fill the mempool for free that's where fee discrimination should happen, not in block validation rules imho Why would you have 2 rulesets when you can have one?
give an example of 'other conditions' It doesn't make sense in any condition to me because the additional rules let you add more restrictive conditions to txs without baking them into the validation rules You're fragmenting consensus it's not fragmenting consensus And the codebase it's allowing stricter rules on the txs while keeping the validation rules to their minimum we can then later make certain rules less restrictive without affecting consensus rules give an example of such a rule in bitcoin this is called policy vs consensus rules https://bitcoin.stackexchange.com/questions/100317/what-is-the-difference-between-policy-and-consensus-when-it-comes-to-a-bitcoin-c Title: What is the difference between policy and consensus when it comes to a Bitcoin Core node validating scripts? - Bitcoin Stack Exchange ok boomer lmao We don't have the Bitcoin fee policy Nor do I think we should ++ it's not just fee policy, it's other stuff like only having 1 fee call per tx i have no idea why you'd put that in the consensus 1 fee call per tx is a validity rule tx validity rule why not put it in the tx propagation rules? it allows you to be stricter with what kind of txs you allow, without compromising the consensus checks it's already in tx propagation rules, as the tx propagation rule is: tx must be valid anyway i put the info there, you can read about it if you exclude it, I can create a block with a tx I haven't propagated that has no fee "The purpose of policy checks is generally to (a) close off DoS vectors and (b) to make future consensus changes safer to deploy, by preventing relay of transactions that would violate such future consensus changes well in advance." free tx look you want people to be able to make cheap txs and have them get confirmed, but they might be too cheap for people to propagate so only special miners will confirm them but there are other reasons to restrict certain unsafe txs or limit usage without adding specific consensus rules you don't want to put everything in consensus since it's final bruh we are talking about a very specific tx validation rule not some random rule the rule you are describing is: if fee_is_sufficient && fee > my_preferred_fee_threshold -> propagate haumea: You don't have a good understanding of our tx validation and it's leading you in the wrong direction the tx must always be valid when propagating, so the fee must be sufficient, that's what we check and want inside tx validation ok, well plz consider the info shared, i won't carry this on It is noted upgrayedd: if len(tx.fee_calls) == 1 -> propagate (not in consensus) that rule doesn't check tx validity, why would you propagate an invalid tx? if everyone has set that rule, the tx will simply keep floating around forever since no one would ever check it i mean in addition to the consensus rules, not instead of them yeah that's why I'm saying, sufficient fee is a consensus rule and consensus rules should always be checked before propagating if consensus_rules_is_valid() && ... && len(tx.fee_calls) == 1 -> propagate the ... stuff is p2p propagation policy, not consensus rules. It's an additional set of rules alongside the consensus rules.
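[editor's note] A toy sketch of the two-ruleset split haumea argues for here (and which is pushed back on below): consensus validity is mandatory everywhere, relay policy is a stricter local filter layered on top. All types and rule functions are hypothetical:

struct Tx {
    fee_calls: usize,
    fee_paid: u64,
    gas_used: u64,
}

// Consensus rules: must hold everywhere, including block validation.
fn consensus_valid(tx: &Tx) -> bool {
    tx.fee_paid >= tx.gas_used // fee covers the gas cost
}

// Local relay policy: stricter, node-configurable, not consensus.
fn policy_accepts(tx: &Tx) -> bool {
    tx.fee_calls == 1 // e.g. only relay txs with exactly one fee call
}

fn should_propagate(tx: &Tx) -> bool {
    consensus_valid(tx) && policy_accepts(tx)
}

fn main() {
    let tx = Tx { fee_calls: 2, fee_paid: 100, gas_used: 50 };
    // Consensus-valid, but this node declines to relay it.
    assert!(consensus_valid(&tx) && !should_propagate(&tx));
}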
sufficient fee is a consensus rule, but my node might only propagate high fee txs this is needed to allow txs with high fee to go through in times of high traffic yeah I understand the usage, but that's an "optimization" ruleset for net congestion, they don't change the fact of basic consensus validation that means that the node holds a "cache" of said txs to propagate at a later time we don't have that (yet) these rules, whatever they are, don't take from the fact that consensus rules must always be valid yeah but i mean you can also add rules there like disallowing certain host functions or certain calldata if we are worried something is unsafe also said tx might be invalid later but without needing to add these restrictions to consensus yeah yeah, but still, that's a per-node rule set, not global/consensus i was saying the limits on the usage of fee calls are better there instead of consensus the min fee thing is a global/consensus rule, hence it must always be true (aka tx being valid) no it's not I gave an example why: me mining a block with a tx I never propagated that is not valid for that rule when I propagate the block, since the rule is not a consensus one other nodes won't check it so what? wdym so what? you have 2 fee calls in a tx what's wrong with that if you mined it cheaper transfer so abusing the protocol yeah but what's wrong if it's priced into gas calcs? it's not, that's the issue fee call is constant gas yeah but it's priced in (not subsidized by the network) + it's disallowed on the p2p layer so it's very hard to do it's gaming the system, it doesn't matter if it's priced in, then what's the point of having a money::transfer? + it's not hard to do yes it is hard to do, you have to mine a valid block or find someone who does, then pay them to mine your tx https://github.com/bitcoin/bitcoin/tree/master/src/policy here for example shows how replace-by-fee is implemented using this also checks on tx calldata (script) to mitigate DoS attack vectors: https://github.com/bitcoin/bitcoin/blob/master/src/policy/policy.cpp#L177 there is another abuse: tx with empty fee, that means I can produce a block with free txs the fee call is there, but it's not sufficient to cover gas cost then it fails the consensus rules and the block is rejected. the consensus rules MUST be valid for blocks to be accepted. I'm not arguing whether or not overlaying policies shouldn't be used/are not good, I'm saying that that specific rule, aka single fee call with sufficient value, must be a consensus one, not a policy one i'm saying the limit on only 1 allowed, and not for example 2, 3 or more, could be a policy rule and imho should not be a consensus restriction I disagree, on the premise of not allowing wacky (ab)use of the protocol : @aggstam pushed 1 commit to master: 78a47053f1: contract/money/Cargo.toml: added missing darkfi validator feature upgrayedd: about the hostlist default path, are you saying it's better to leave the default empty and have it be manually configured by apps?
otherwise not sure what you mean by "this file should be using the app name" I mean the default one should use the app name (you can get that using the CARGO_PKG_NAME) ah kk didn't know that, tnx so for example if in darkfid I haven't configured a path, it should be : ~/.local/darkfi/darkfid/hostlist.tsv ++ in darkirc -> ~/.local/darkfi/darkirc/hostlist.tsv you got it :D and with config_dir() first i think for cross platform yy, just use app-specific folder ++ hence why I used .local not .config since these are runtime-produced stuff, not init configuration ++ : @parazyd pushed 1 commit to master: e034470611: runtime: Minor comment cleanups and log verbosity. btw feature error is fixed, make check should pass now (TM) Sweet test test back sry disconnected for a sec nothing missed : @parazyd pushed 3 commits to master: 87a85e047f: runtime/import: Enable pages assertion in put_object_bytes() : @parazyd pushed 3 commits to master: 3240221614: runtime/import: General function cleanup and use darkfi_sdk error codes... : @parazyd pushed 3 commits to master: e340fa6824: sdk: Apply relevant changes related to 3240221614727e7bb754de6b33397dc90a92ddee Let's see if tests pass :p ACTION runs make test locally Make sure to recompile contracts I never updated my vks/pks so we are good XD most contract tests seem to pass Most? :D No you need to recompile the wasms Because sdk code changed https://github.com/darkrenaissance/darkfi/blob/master/Makefile#L133 everything passed Kewl : @aggstam pushed 1 commit to master: e829424a9c: sdk/util: added block height retrieval functions and use them at appropriate places brawndo: repo is green again :D haumea: check latest commit, now you don't have to use convoluted terminology :D ^_^ : @aggstam pushed 1 commit to master: 0b63956945: contract/money/error: added missing error code upgrayedd: something we overlooked in the ring buffer vs random peer selection discussion is that the hostlists are ordered by "last seen" with the most recently seen peers at the top of the list i'm not sure about combining the ring buffer with this existing ordering hm let me think the fact that they are ordered tho has nothing to do with selection right? as it's still getting a random one from that list not rn, but if we add a ring buffer then it impacts yeah it will how/where is this order used? it's used when we fetch addresses to connect to so choosing the ones with the most recent last_seen fields first then can't that act as the ring buffer? yeah that's what i'm wondering if it's kinda redundant given this pre-existing ordering well if you always grab the last in that list, it's the same as using a ring buffer but backwards since if they respond they will get on top of the list (as last seen whitelisted) if they don't they get demoted so greylist will handle them we were talking about using the ring buffer in the greylist refinery tho why does greylist have a last seen? if they respond they get promoted to whitelist every host list is a Vec nodes send their whitelist which is a Vec on receiving the whitelist, nodes add it to their greylist then they ping the entries if the node is active, it is promoted to whitelist with an updated last_seen when for example in peer discovery you receive new nodes to append to the greylist do you sort it again? and based on the last_seen field there if we do end up choosing connections from greylist (this can happen if we have no whitelist or anchorlist connections) it will choose from the one with the most recent last seen field first us or peer?
yes we always sort when we call store() on a hostlist (grey, white, anchor) ok then same logic you didn't answer my question tho who sets the last_seen field? us or the peer depends and how is it getting updated, since we don't check whitelisted peers for liveness? just on first attempt to connect to? i am setting the last_seen field of the whitelist and anchorlist hostlists but other nodes set the greylist hostlist (this is their whitelist) s/hostlist/last_seen last seen is set in the greylist refinery wait because the last statement doesn't make sense lemme be more clear is it set only in greylist refinery? because if that's the case, an anchor/whitelist node's last seen will never get updated if we always connect to it as it will never get downgraded to greylist greylist last seen: sent by other peers, whitelist last seen: set by the greylist refinery (which pings nodes and promotes them to whitelist with an updated last seen), anchorlist: set when we establish a connection to a peer oh yeah you said we always push whitelist to greylist on startup so they get updated by that yep we do that now ok so I would go like: for greylist just keep it random, as it doesn't really matter since the moment we connect they get removed from the list (I assume when in peer discovery we check if received addresses are not already in our lists) for whitelist: since we have the last seen order, you can simply always get the last (oldest) node from the list to check since the order acts as the ring buffer but in reverse we don't check whitelist remember yeah but we said we should or wait no, we just add all the whitelist to the greylist at the end if the channel gets closed we demote them? so it has to go via the refinery no, we never demote, but we do remove anchorlist connections if they disconnect so if a node goes into whitelist, we never move them out while running? even if they disconnect? yes correct well that's wrong it's what monero does we simply take another connection we should demote them when they disconnect why tho? if they go down, why send them to another peer? they go into their refinery so if they are offline, they get removed that's why all recv'd nodes are deemed suspicious and placed on the greylist so we move them from whitelist to greylist s/nodes/hostlists node1 sends faulty whitelist to node2, node2 downgrades to greylist sorry wrong yeah but why keep that info in our whitelist since it's disconnected? node2 removes from greylist it doesn't make sense i mean we can remove it, but it's kind of redundant since that's the entire point of the refinery also monero doesn't do it like this yeah but it's extra redundant info on both sides we hold info we don't need the other side gets info that will surely fail why have that? maybe monero missed this point monero p2p network is ultra stable what we are discussing is not about stability it's about having unnecessary stuff it's like me giving someone a milk sweet, knowing they are lactose intolerant k well it's easy to change shit will hit the fat s,fat,fan just seems kinda redundant the extra request the peer will make to a closed node seems kinda redundant to me so if we can prevent that, why not do it? : @lunar-mining pushed 7 commits to net_hostlist: 4bf43ec521: net: downgrade whitelist to greylist on stop... : @lunar-mining pushed 7 commits to net_hostlist: 06ae4fd054: settings: change refinery interval default to 5 seconds...
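[editor's note] A minimal sketch of the downgrade-on-disconnect behaviour those commits implement, for reference: when a channel to a whitelisted peer stops, the entry goes back to the greylist so the refinery re-checks it later. Hosts/Addr here are hypothetical stand-ins, not the real store code:

type Addr = String;

struct Hosts {
    greylist: Vec<(Addr, u64)>,
    whitelist: Vec<(Addr, u64)>,
}

impl Hosts {
    // Called when a channel to a whitelisted peer stops (the real code
    // would hook this off something like remove_sub_on_stop()).
    fn downgrade(&mut self, addr: &Addr, last_seen: u64) {
        if let Some(pos) = self.whitelist.iter().position(|(a, _)| a == addr) {
            self.whitelist.remove(pos);
            // Back to the greylist: suspicious until pinged again.
            self.greylist.push((addr.clone(), last_seen));
        }
    }
}

fn main() {
    let mut hosts = Hosts {
        greylist: vec![],
        whitelist: vec![("tor://example.onion:25551".to_string(), 1_700_000_000)],
    };
    hosts.downgrade(&"tor://example.onion:25551".to_string(), 1_700_000_100);
    assert!(hosts.whitelist.is_empty());
    assert_eq!(hosts.greylist.len(), 1);
}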
: @lunar-mining pushed 7 commits to net_hostlist: 99d0adc5bb: settings: change default hostlist to .local/darkfi/CARGO_PKG_NAME... : @lunar-mining pushed 7 commits to net_hostlist: 4ded978f06: chore: cargo fmt one last thing how does seed check liveness of advertised nodes? seed (lilith) is not connected to any of them as they only connect on startup and advertise their address is this still the logic or it changed? lilith does the refinery it's the same as any other node nodes send their address through protocolseed it gets added to the greylist, refinery etc yeah, once moved to whitelist tho, how does lilith know they're still alive? it's the same problem we keep a list and let the peer handle liveness so lilith's list can be total garbage, yet we still send it around that's why we added the purging loop btw, so lilith (seed) doesn't keep garbage around https://darkrenaissance.github.io/darkfi/arch/p2p-network.html#proposed-update this is the TLDR of the monero impl Title: The DarkFi Book https://eprint.iacr.org/2019/411.pdf this is the main source i was using, plus the monero p2p module see section 2.2 'Peer list' so i think a large part of this design is to reduce the pressure on seed nodes but yeah afaik there could be an attack with seed nodes sending bullshit lists which is why monero seed nodes are centralized/trusted in the monero impl, there is no "whitelist refinery" yeah it's not needed since they don't remove garbage from whitelist the problem with what I'm reading is that they don't define why they don't remove them why should we accept that? our seed nodes are minimal, they don't have pressure and we want full decentralization, so this is not acceptable imho garbage should always be cleaned so if we want to remove them, my proposal would be to do so inside "remove_sub_on_stop" which is a method that waits for a stop event on a channel and removes it from the p2p list of channels if a stop event is received (this is where we currently remove anchorlist entries) (when they disconnect) so when a whitelist node gets dc'd, nodes move it to their greylist (to try again later) and seed periodically pings nodes to remove them that works yeah nodes should handle the whitelist demotion like they already do with anchors but lilith needs the purging task/loop as it's not connected to any of them so basically lilith would implement a "whitelist refinery" yeah, pretty much the purge loop task but instead of a ringbuffer, you use the already sorted list and always pick the last one if they respond they go on top, otherwise yeet ACK btw want me to go full schizo mode? not removing a disconnected peer's ip is against privacy as it's similar to telemetry they keep track of who's connected to their seed :D well nodes can set advertise to false if they don't want to share info but i get u it's like "delete my data on exit" exactly! btw doesn't that mean that all node reports from monero are wrong? I mean if the report is based on their seeds' whitelist, it's never up to date if they don't check liveness (or removing) lain: btw instead of vectors, wouldn't a hashset<u64, Vec<_>> be more optimized?
(assuming u64 ascending order) new entries will always go on last position so you can quickly grab each side with .first() and .last() and you don't need to manually sort them (the vec is because 2 entries might have same last_seen) but I guess the vector does the same job, since last_seen is based on your clock which is always ascending, so you just push it on front with same effect that's a good point re: appending to the end the reason we opted for a Vec was cos you can't randomly select from a hashset and we required that in the greylist refinery biab, eating. bon appétit! gm gm gm gm file:///home/narodnik/src/darkfi/doc/book/spec/crypto-schemes.html#homomorphic-pedersen-commitments ffs https://darkrenaissance.github.io/darkfi/spec/crypto-schemes.html#homomorphic-pedersen-commitments Title: The DarkFi Book we should maybe customize the personalization : @Dastan-glitch pushed 1 commit to master: 059dd47523: spec: add missing merkle tree section haumea: It might break the gadgets, I'm not sure Orchard is MIT license anyhow I'm not super-clear on how to generate the stuff in src/sdk/src/crypto/constants/fixed_bases/ ok np, it's fine what sort of personal projects do you guys recommend I do in Rust that would be relevant to this project? Should I build my own blockchain? do the tutorial on writing a p2p app then make some p2p collab tools like accounting or a calendar for example the calendar could use calcurse as its frontend ah nice, that's a great idea actually Should rather work with geode to get a filesystem working haven't heard of calcurse but looking it up now I love the look of it Then you can use that for .ics files and you don't have to bother with calendars nice thanks, will look into that too geode https://github.com/darkrenaissance/darkfi/blob/master/src/geode/mod.rs https://github.com/darkrenaissance/darkfi/tree/master/bin/fud/fud/src well geode files would have no knowledge of the structure of the calendar Why would they have to? so operations to update, add items etc. are atomic Just write a FUSE implementation Backed by geode what about conflicts? It's append-only the file is just a series of bytes though but if there's a structure of nodes, then conflicts can often be automatically resolved You build such algorithms on top of geode like if you add an item to the calendar, and i add one, then the geode impl doesn't know they are interchangeable Geode is for data storage And for example "fud" is for file storage, which implements the logic of files on top of geode You could do something like that for any file type or whatever so i'd use geode for sharing webpages or images on a marketplace but wouldn't you prefer the event graph for the calendar or tasks? No you'd use something on top of geode how is the calendar different from the chat? Geode does not do networking The calendar is different from the chat in the way that it uses a different protocol. You have to manage .ics files eventually if you want to be compatible with existing calendar utilities. the .ics files are exported from calcurse, but the calendar itself is a series of atomic operations you can share directly over the p2p ics files are used for sharing events so what's the purpose of geode? Does it help with p2p file sharing?
Maybe you want something like caldav instead deki: Geode is a module for storing data in fixed-size chunks You can plug a p2p protocol on top of it to implement file sharing and routing tables ah I see, well I'll start to look into it soon enough, will likely come back with more questions if you're really ambitious, you can create an anon p2p marketplace - seller pages using fud dht/geode - DMs / listings using darkirc/event graph - escrow is in script/escrow.sage - reviews using rate limit nullifiers ooh I like the sound of that, but isn't that what got Ross Ulbricht in jail :\ aren't you looking forward to some peace and quiet I think I'll try that as a side project after I've done something easier that's p2p lol I suppose came across this on twitter, pretty interesting stuff: zero knowledge ML https://twitter.com/svpino/status/1747247975246528900 what is interesting about it? what is the problem it's solving? using zk proofs in deep fakes, or like he said, which model generated it, or signing legitimate imagery as proof it's real I don't think it's solving a problem, at least not right now so a hospital (his example) wants me to run their ML algo on my data, right? why would they want me to do that instead of just doing it themselves? trying to understand hmm good point, not sure maybe you don't want to release your data? to them yeah but what's the relation to ML? it's just evaluating a function on my data and i don't see what privacy it gives you surely the privacy comes from doing it in aggregate with 1000s of other people and they want to make some statistical measure of the data... which ZK cannot do (you need FHE) yeah that's valid, it does seem like a solution searching for a problem. Only application I can think of is if you want to keep private certain parts of your ML model, whilst allowing other parties to use it for the output so they run a ML model on my data and show me they put my data in the ML model to get the result... but i cannot see what the ML model is so the result could be random unless there's a third party who verified the ML model and it has a special auth signature well yeah if you want to prove the output came from that ML model, wouldn't a zk proof be a valid way to do it? yes it would, but the problem is you cannot access the ML model so how do you verify it? there has to be some kind of auth from a trusted third party I see what you mean ZK and ML/AI are big money topics rn not sure how that could be resolved, other than the trusted 3rd party auth i'm often wrong on this kinda stuff tho well, a lot of this tech is still so new/early that it's hard to say if it will have any applications I think zk proofs are kinda awesome, I've seen them around (like zkSync) but it wasn't until someone posted a course link in telegram that I appreciated them yeah they are really good what about FHE? is it going to be used in darkfi? Eventually It's still really slow because it does compute on encrypted data? Yeah A lot of things going on It's multiple gigabytes of RAM needed for relatively simple operations I see : @parazyd pushed 1 commit to master: f9515f3ddc: runtime: Begin implementation of host function gas costs upgrayedd: https://github.com/darkrenaissance/darkfi/blob/master/src/runtime/import/util.rs#L284 There is this call that reads from sled(-overlay) Is there a way to know the value's size without reading it completely into memory?
brawndo: it invokes this: https://github.com/darkrenaissance/darkfi/blob/master/src/blockchain/slot_store.rs#L167 so the return is a serialized Slot yy so that is already in memory https://github.com/darkrenaissance/darkfi/blob/master/src/sdk/src/blockchain.rs#L84-L99 I guess no way to know before reading? so the size should be this serialized struct the problem is that it's not constant That's already using up the memory as we have some vecs I wanted to know if there is a way to just query the size before reading from the db? Yeah I have to check if sled supports that I don't see something like that nbd if not Just a bit of a spam vector yeah I get it : @parazyd pushed 1 commit to master: c03d48645c: runtime/import: Subtract gas fee in get_slot() You see here ^ Sometimes we know the gas costs in advance so we don't even have to perform the operations But unfortunately not possible from dbs, so we still end up filling the memory before exploding I don't know if we can do some trick like reading the iterator IVec len but don't know if it will load that to memory or not I think IVec is also already allocating aha It says it's a buffer Not a stream true true checking docs now so it's a limitation of the underlying tech Yeah np we can do an index tree where key = hash, value = size In any case, I experimented with a few things and figured out an ez way to account for host gas so we know the sizes of everything in blockchain So it's a win in the end :D Nah that's overkill aha noice yeah just thinking out loud I did loads of things ACTION hates that corpo lingo Then in the end: https://github.com/darkrenaissance/darkfi/commit/f9515f3ddc90c018d44dd02fcf302b69feb54f22#diff-09de91bd3ba3706233f6be38745e3e20100f1f1717c4506036ec5e85fea1f1ce Title: runtime: Begin implementation of host function gas costs · darkrenaissance/darkfi@f9515f3 · GitHub will check them later right now my pc is overloaded with docker builders lmao (one for ungoogled chromium and one for librewolf) I'll bbl yeah I'm doing some personal maintenance have fun I build overnight that was the plan my calculations were good, but damn I'm bad at math XD soz was disconnected !list Topics: 1. funcid in coin (by haumea) hintjens on meetings: https://youtu.be/7HECD3eLoVo?t=703 Title: Pieter Hintjens - How Conway's Law is eating your job?, Opening Keynote at Coding Serbia 2015 - YouTube gm gm gm* gm hey gm : @Dastan-glitch pushed 1 commit to master: 3ac3f314ed: spec: add dao::vote() : @parazyd pushed 1 commit to master: 9fb2febfb3: runtime/import/db: Implement host gas cost for zkas_db_set()... : @lunar-mining pushed 6 commits to net_hostlist: 3d5eabfe59: net: downgrade host if they disconnect or we can't connect to them.... : @lunar-mining pushed 6 commits to net_hostlist: eda5c69af4: lilith: add whitelist_refinery task... : @lunar-mining pushed 6 commits to net_hostlist: f3361db4c4: lilith: change hostlist paths on default config : @lunar-mining pushed 6 commits to net_hostlist: 2674cfd32e: store: do not shuffle hosts on fetch_address()... : @lunar-mining pushed 6 commits to net_hostlist: 576afd574d: store: create test_remove() unit test : @lunar-mining pushed 6 commits to net_hostlist: fed0e582c1: net: bug fixes and cleanup...
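[editor's note] A toy sketch of the post-read gas charging discussed above, for reference: since sled hands back an already-loaded buffer (IVec), the cost can only be subtracted after the value is in memory, which is exactly the spam-vector limitation mentioned. GasMeter and db_get are hypothetical stand-ins, not the actual runtime imports:

struct GasMeter {
    remaining: u64,
}

impl GasMeter {
    // Charge 1 gas per byte read; fail once the limit is exhausted.
    fn charge(&mut self, n_bytes: u64) -> Result<(), &'static str> {
        if n_bytes > self.remaining {
            self.remaining = 0;
            return Err("gas limit exceeded");
        }
        self.remaining -= n_bytes;
        Ok(())
    }
}

// The value is already fully in memory by the time its length is known,
// so the charge necessarily happens after the read.
fn db_get(meter: &mut GasMeter, value: &[u8]) -> Result<Vec<u8>, &'static str> {
    meter.charge(value.len() as u64)?;
    Ok(value.to_vec())
}

fn main() {
    // 400M hardcoded gas limit, as mentioned in the chat.
    let mut meter = GasMeter { remaining: 400_000_000 };
    let v = db_get(&mut meter, b"serialized slot bytes").unwrap();
    println!("read {} bytes, {} gas left", v.len(), meter.remaining);
}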
: @lunar-mining pushed 1 commit to net_hostlist: deb3ea5936: net: cleanup warnings + run make clippy : @parazyd pushed 1 commit to master: a3ed654d3a: runtime/import/db: Apply gas subtraction to remaining db.rs functions : @lunar-mining pushed 1 commit to net_hostlist: 818ceaec4d: store: don't remove from greylist or whitelist on anchorlist upgrade... upgrayedd: i added those things we discussed lilith now does a "whitelist refinery" + we downgrade hosts when we can't connect to them or when they disconnect wen merge lain: soon (TM) XD what's lilith? Think I've seen it mentioned before deki: https://github.com/darkrenaissance/darkfi/tree/master/bin/lilith nice, thanks afk b : @lunar-mining pushed 98 commits to net_hostlist: 1a60b322a0: dchat: renamed dchat to dchatd and add placeholder dchat-cli : @lunar-mining pushed 98 commits to net_hostlist: 9edf44684c: doc: update dchat tutorial chapter 1 : @lunar-mining pushed 98 commits to net_hostlist: dc26084279: doc: add dchat tutorial to SUMMARY : @lunar-mining pushed 98 commits to net_hostlist: e8f93527f9: Cargo.toml: change dchatd directory to example/dchat/dchatd : @lunar-mining pushed 98 commits to net_hostlist: dd11c47af5: doc: fix dchat tutorial chapter2 : @lunar-mining pushed 98 commits to net_hostlist: 2dff107fee: doc: create dchat tutorial chapter 4 and specify TODOs : @lunar-mining pushed 98 commits to net_hostlist: c4d5f90020: doc: finalize dchat tutorial and add TODOs : @lunar-mining pushed 98 commits to net_hostlist: 7285b80600: doc: update SUMMARY with new dchat tutorial flow : @lunar-mining pushed 98 commits to net_hostlist: fafa2c53a4: dchat: add anchors/ fix ports/ uncomment daemon : @lunar-mining pushed 98 commits to net_hostlist: 54ab2c3947: Cargo.lock: update dchat dependencies : @lunar-mining pushed 98 commits to net_hostlist: 5709066f1a: dchat: remove deleted files and add new ones : @lunar-mining pushed 98 commits to net_hostlist: b521300bc9: doc/ dchat: add TODO : @lunar-mining pushed 98 commits to net_hostlist: daeefbb7bf: money: switch to new nullifier scheme N = hash(secret, coin) : @lunar-mining pushed 98 commits to net_hostlist: ebbd88dee3: update Cargo.lock : @lunar-mining pushed 98 commits to net_hostlist: fa71b711a1: spec2: concepts page : @lunar-mining pushed 98 commits to net_hostlist: 25e696e03b: fix & update darkirc test script : @lunar-mining pushed 98 commits to net_hostlist: 9ae3668779: spec2: add section on crypto schemes : @lunar-mining pushed 98 commits to net_hostlist: bbddc4d0b6: spec2: sections on concepts, notation, pallas/vesta : @lunar-mining pushed 98 commits to net_hostlist: d70b133dae: doc: Update .gitignore : @lunar-mining pushed 98 commits to net_hostlist: e2dafa6051: example/dummy-contract: Fix paths : @lunar-mining pushed 98 commits to net_hostlist: b6bb82798a: dchat: Remove stray lines in Cargo.toml : @lunar-mining pushed 98 commits to net_hostlist: 857ebb42bf: chore: Update crate dependencies : @lunar-mining pushed 98 commits to net_hostlist: 77f3c0d079: net: Port from deprecated async-rustls to futures-rustls.... 
: @lunar-mining pushed 98 commits to net_hostlist: a4b011e93e: book: Fix ZK circuit paths for RLN page : @lunar-mining pushed 98 commits to net_hostlist: 08e1ddc33c: ec: fix small typos : @lunar-mining pushed 98 commits to net_hostlist: d30a0f312c: dao::vote(): add proposal_bulla to the nullifiers : @lunar-mining pushed 98 commits to net_hostlist: 59a819b9aa: remove unmaintained ShareAddress : @lunar-mining pushed 98 commits to net_hostlist: 7240e6251f: chore: Clippy lints : @lunar-mining pushed 98 commits to net_hostlist: f2390ec288: runtime: Remove unused sanity_check() function... : @lunar-mining pushed 98 commits to net_hostlist: 0668ac4606: contract/deployooor: Implement initial client API : @lunar-mining pushed 98 commits to net_hostlist: cb9f095e9d: contract/deployooor: Include deployment instruction payload in params : @lunar-mining pushed 98 commits to net_hostlist: 55f72d0956: validator: Implement contract deployment handling upon tx verification : @lunar-mining pushed 98 commits to net_hostlist: 9d2671d9c7: validator: Move DeployParamsV1 to darkfi-sdk : @lunar-mining pushed 98 commits to net_hostlist: 80b650e4f5: contract/deployooor: Add initial integration test : @lunar-mining pushed 98 commits to net_hostlist: b0ba8b7d3d: validator: Deploy the deployooor contract as a native contract : @lunar-mining pushed 98 commits to net_hostlist: ed892ea991: contract/deployooor: Update API to use DarkLeaf for contract calls : @lunar-mining pushed 98 commits to net_hostlist: 099e2e72ba: contract: Strip built WASM binaries using wasm-strip from the wabt toolkit : @lunar-mining pushed 98 commits to net_hostlist: 4ec1daa589: chore: Enable some additional arti-client crate features... : @lunar-mining pushed 98 commits to net_hostlist: b10b078147: runtime: Disable payload debug message on Deploy : @lunar-mining pushed 98 commits to net_hostlist: 494da41475: contrib/dependency_setup.sh: wabt dep dependency added for xbps : @lunar-mining pushed 98 commits to net_hostlist: a318bb3d76: validator: check if proposal already exists when looking ofr its fork index... : @lunar-mining pushed 98 commits to net_hostlist: fd177220e1: spec2: DAO model : @lunar-mining pushed 98 commits to net_hostlist: 901793bb79: spec2: vote nullifiers and finish dao model page : @lunar-mining pushed 98 commits to net_hostlist: 690b747b26: contract/test-harness: Set fixed-difficulty=1 mining : @lunar-mining pushed 98 commits to net_hostlist: 9a2505c9e1: zk/debug: add Uint32 and MerklePath for export_witness_json() : @lunar-mining pushed 98 commits to net_hostlist: 063a03e892: dao::vote(): correct mistake in nullifier : @lunar-mining pushed 98 commits to net_hostlist: 68f08077f8: ci: Install wabt for book gen : @lunar-mining pushed 98 commits to net_hostlist: 870fd3e246: dao: replace use of blake3 hash with blake2b. 
See code comments for explanation of the rationale : @lunar-mining pushed 98 commits to net_hostlist: 62b9cdc04c: dao model: add note about blake2 hash function usage : @lunar-mining pushed 98 commits to net_hostlist: 4d411b0934: book: git mv spec2 spec : @lunar-mining pushed 98 commits to net_hostlist: fecd412ecf: book: botched move : @lunar-mining pushed 98 commits to net_hostlist: 7a6b0a5203: contract/money: Ignore benchmark tests when running test units : @lunar-mining pushed 98 commits to net_hostlist: d24248d026: contract/money: Rename "integration" test to "token_mint" : @lunar-mining pushed 98 commits to net_hostlist: ecc5c6ae11: contract/money: WIP complete integration test : @lunar-mining pushed 98 commits to net_hostlist: 9c18ec3446: contract/money: Fix slot generation in integration test : @lunar-mining pushed 98 commits to net_hostlist: 2f9a9cc237: contract/money/integration: Gather block reward owncoins : @lunar-mining pushed 98 commits to net_hostlist: 31d0d2f617: validator: Configurable fee verification, incomplete... : @lunar-mining pushed 98 commits to net_hostlist: daa625d856: validator:pow: decoulbed mine_block() from PowModule so it can be used outside of it : @lunar-mining pushed 98 commits to net_hostlist: 4fffd4ac2c: script/research/minerd: miner daemon skeleton : @lunar-mining pushed 98 commits to net_hostlist: 54a2674717: script/research/minerd: handle new request trigger using smol channels : @lunar-mining pushed 98 commits to net_hostlist: efe6f39041: darkfid2: use minerd to mine blocks, validator: cleaned up threads info as its not longer required : @lunar-mining pushed 98 commits to net_hostlist: 2074625d1d: src/event_graph: request and reply multiple events : @lunar-mining pushed 98 commits to net_hostlist: 2fe38d699c: remove unused import : @lunar-mining pushed 98 commits to net_hostlist: cda4521dad: spec: change from blake3 to blake2b and add explainer why : @lunar-mining pushed 98 commits to net_hostlist: 111c803085: spec: DAO::mint() and DAO::propose() : @lunar-mining pushed 98 commits to net_hostlist: c83347857e: spec: reword section on blake2 : @lunar-mining pushed 98 commits to net_hostlist: 77fb9d4321: src/event_graph: aquire locks outside loops : @lunar-mining pushed 98 commits to net_hostlist: 9fd2e2e467: make DAO nullifiers the same as money, otherwise we can't detect whether the coins we're using were already spent or not. Having access to a set non-membership merkle tree here would fix this. : @lunar-mining pushed 98 commits to net_hostlist: 493fbfe1eb: spec: DAO::propose() nullifier : @lunar-mining pushed 98 commits to net_hostlist: ce5a92ff21: spec: add money contract with money transfer : @lunar-mining pushed 98 commits to net_hostlist: d67c3d2029: spec: DerivePubKey() : @lunar-mining pushed 98 commits to net_hostlist: d42ba0d511: contract: Move POW_REWARD constant to money contract : @lunar-mining pushed 98 commits to net_hostlist: ab044b02fd: contract/test-harness: Update VKS and PKS checksums : @lunar-mining pushed 98 commits to net_hostlist: 5d8e4a5451: Revert "contract: Move POW_REWARD constant to money contract"... 
: @lunar-mining pushed 98 commits to net_hostlist: d7ef5c25e1: contract/money/integration: Assert expected PoW reward : @lunar-mining pushed 98 commits to net_hostlist: 8e5e997426: contract/test-harness: Include Money::FeeV1 zk circuit for cached vks : @lunar-mining pushed 98 commits to net_hostlist: 8087222a86: validator/verification: Allow fee call at any place in the transaction : @lunar-mining pushed 98 commits to net_hostlist: c7248f44f6: darkfid2: parse network config directly from the config file, not as flattened arg : @lunar-mining pushed 98 commits to net_hostlist: e46c6dbb6b: sdk/crypto: use the same generator for pedersen_commit_base() and pedersen_commit_u64() : @lunar-mining pushed 98 commits to net_hostlist: be4898e0c7: Revert "sdk/crypto: use the same generator for pedersen_commit_base() and pedersen_commit_u64()"... : @lunar-mining pushed 98 commits to net_hostlist: 6b9ea039d7: spec: add money coin, current day, pedersen commits : @lunar-mining pushed 98 commits to net_hostlist: 4cd92a8189: dao/spec: rename all mentions of slot to blockheight : @lunar-mining pushed 98 commits to net_hostlist: 639b4e89df: test-harness: s/slot_to_day/blockheight_to_day/ : @lunar-mining pushed 98 commits to net_hostlist: d3fae80d5a: chore: Update copyright year in license headers : @lunar-mining pushed 98 commits to net_hostlist: 3d36a1b382: contract/money/Cargo.toml: added missing darkfi validator feature : @lunar-mining pushed 98 commits to net_hostlist: 5b104ef6a5: runtime: Minor comment cleanups and log verbosity. : @lunar-mining pushed 98 commits to net_hostlist: e16e5ceed9: runtime/import: Enable pages assertion in put_object_bytes() : @lunar-mining pushed 98 commits to net_hostlist: 0d67df0adb: runtime/import: General function cleanup and use darkfi_sdk error codes... : @lunar-mining pushed 98 commits to net_hostlist: 703666623f: sdk: Apply relevant changes related to 3240221614727e7bb754de6b33397dc90a92ddee : @lunar-mining pushed 98 commits to net_hostlist: eb2dc41dd7: sdk/util: added block height retrieval functions and use them at appropriate places : @lunar-mining pushed 98 commits to net_hostlist: 1b19b54099: contract/money/error: added missing error code : @lunar-mining pushed 98 commits to net_hostlist: 764da6e7c8: spec: add missing merkle tree section : @lunar-mining pushed 98 commits to net_hostlist: 4f97ed6989: runtime: Begin implementation of host function gas costs : @lunar-mining pushed 98 commits to net_hostlist: ab3a55c6fe: runtime/import: Subtract gas fee in get_slot() : @lunar-mining pushed 98 commits to net_hostlist: 929166b412: spec: add dao::vote() : @lunar-mining pushed 98 commits to net_hostlist: 8a10b292c3: runtime/import/db: Implement host gas cost for zkas_db_set()... 
: @lunar-mining pushed 98 commits to net_hostlist: fe28c86fa3: runtime/import/db: Apply gas subtraction to remaining db.rs functions looks scarier than it is lol just updating the remote so i merged master in order to fix the failing 'cargo check', but now the master commits have been echoed into the pull request thought it would just show diff https://stackoverflow.com/questions/16306012/github-pull-request-showing-commits-that-are-already-in-target-branch Title: git - GitHub pull request showing commits that are already in target branch - Stack Overflow tried the solution here but it didn't work anyway, i think this is ready to merge i'd like to move onto other tasks lmk thoughts : @aggstam pushed 1 commit to master: 4c8dab8204: drk: drk rewritte skeleton added gm afk hey : @Dastan-glitch pushed 1 commit to master: 75484eb7e4: spec: DAO::exec() : @Dastan-glitch pushed 1 commit to master: 1061873008: spec: rename coin params to coin attrs brawndo: i'm getting dao make test failing with merkle_add_ result being i64 instead of i32. I checked all the defns across the code, and they all return i64. I also removed target and .wasm codes and rebuilt debug!(target: "runtime::vm_runtime", "Instantiating module"); let instance = Arc::new(Instance::new(&mut store, &module, &imports)?); this is where it fails in vm_runtime.rs. i also don't see anything in imports about the return value, and some of those methods return i64 haumea: Which test fails specifically? pipiline tests passed fyi so its a it doesn't work on my machine situation XD My favorite one the DAO integration test Error: WasmerInstantiationError("Error while importing \"env\".\"merkle_add_\": incompatible import type. Expected Function(FunctionType { params: [I32, I32], results: [I32] }) but received Function(FunctionType { params: [I32, I32], results: [I64] })") i'm looking all over the code but don't see anything. i thought maybe it's a build artifact, but i don't see what it could be i did git bisect and traced it to 3240221614727e7bb754de6b33397dc90a92ddee It seems that something didn't recompile i'm cloning a new repo to test ah damn it works wtf :) aha i had to delete all the wasm files, not just DAO ones I think running `clean` from the main Makefile would sort you out Or distclean even ok thanks btw idk if this is recommended: darkfi/src/contract/dao/proof/dao-auth-money-transfer-enc-coin.zk:26 or how i should import pubkeys into .zk Yeah that's fine Alternatively you can just witness its coordinates separately But then I suppose you can't do DH It's ok, I wouldn't call it an UGLY HACK haha More like casting without syntactic sugar :D lol ok cool thanks can i add a python macro preprocessor to .zk files for the DAO? i have a lot of code duplication going on : @parazyd pushed 1 commit to master: 5aad1deb73: runtime/import/merkle: Account for gas costs in merkle_add() haumea: What would that look like? probably just jinja macros https://ttl255.com/jinja2-tutorial-part-5-macros/ Title: Jinja2 Tutorial - Part 5 - Macros | ugh ugly yeah true Why don't you just implement functions in zkas that unroll on compile time? 
nbd don't want to get too distracted, trying to finish the spec in time Code duplication is a non-issue That's a future optimisation sure I wanted to implement functions and loops in zkas eventually but it'd take too much time right now It can just be done through syntax really The compiler can just unroll it all So it doesn't get really messy either well it might be a bad idea to just add that ad hoc to the toolchain Why do you think so? because ideally the zk proofs would be written inline with the verification logic, and have a common format for defining types right now you have to look in a .zk file, then go to the relevant section in process_instruction(), and then make sure things match in get_metadata() and the client code whereas in the crypto papers it's usually written in terms of: wallet does this, verifier checks this and also does this check in the proof Yeah well you'd do the same implementation like the zkvm, but it would be executed natively as opposed to halo2/zk "Verification" is an ambiguous word though yeah i just mean the model. what we have is fine right now, just the next layer tooling will do more heavy lifting, but we don't need to add much to what we have already (so you write a kind of type that works across zk and wasm, and it gets compiled to relevant stuff for both) and then the ZK logic is done in some kind of block, like unsafe { ... } in rust Yeah perhaps Anyway I finished the host gas costs for imported wasm functions, now (almost) everything accounts for gas Although not perfectly But we should now be able to have proper fees nice, how do you want to do pricing? by running benchmarks? I went with every wasm opcode = 1 gas, and the host functions mostly account for 1 byte read = 1 gas and 1 byte written = 1 gas I don't have a clear idea on correct pricing we need data, and then we can create a model rn we have a hardcoded gas limit of 400M Our denomination is 10^8 = 1 DRK I suppose we want to keep fees under 1 DRK most times? On Solana fees are small, like 0.0001 or so Maybe a bit more these days with more complex contracts We could divide the spent gas by some number and have that as the required fee each DRK is $0.06 likely more eventually I'm not talking about USD value That's irrelevant at this point I think so 0.1 DRK might be good to start we don't have much (any) optimizations to the blockchain, but traffic will start slow For example DAO::propose(): [WASM] Gas used: 15422191/400000000 however it will become a much bigger problem with usage 84577809 the tx with Exec is the most expensive, since it does 3 calls (transfer, auth, exec) >>> 84577809 / 10**8 0.84577809 sry nvm this lol >>> 15422191/400000000 0.0385554775 0.15422191 so 4% of the gas limit ^ This is the DRK cost of dao propose 0.154 that's reasonable If we would go 1 gas = 1 denom This does not account for zk or signatures yet btw ah yeah... both expensive ops https://github.com/darkrenaissance/darkfi/blob/master/src/consensus/fees.rs I had this as a placeholder, but I don't know if the pricing is right i think you can just price it based off the number of columns and rows And signatures should probably be: tx_size * n_sigs we can look at the zcash cost estimator I don't know if we know columns/rows in advance Lemme check the VK struct what does pricing reflect? like how do we prefer CPU, memory, data written, bandwidth is it purely the price in terms of CPU cycles?
Computational effort in general needs to be precise It's CPU and data I'll ask re: columns and rows in a proof 11:43 And signatures should probably be: tx_size * n_sigs Does this look right? : @Dastan-glitch pushed 1 commit to master: b1da730489: dao: apply same verifiable encryption for other outputs, to the DAO change output as well. i think it's more like (tx_size + C) * n_sigs C is a fixed cost per signature verification : @parazyd pushed 1 commit to master: 7b42b1c1e2: validator/verification: Fix tx fee call index Right yeah btw we can return [i64; N] from wasm functions (just confirmed) so we don't need to stuff multiple values in a single i64 or anything like that ah cool I don't think we are doing that anywhere right now, but good to know I fixed up all the error code stuff as well yeah i saw that, thanks there's some stuff in SDK error handling where we use i64::MIN + error value, and anything >0 for return values so that potentially could be changed to [i32; 2] instead if we thought it was less error prone Yeah it could but that reminds me too much of golang lol never used it I think it's fine to follow this style, where <0 is error golang sounds like erlang Yeah people called it errlang (sic) as a meme foo, err := do_something() if err != nil { log.Fatal(err) } You always have to check errors like this a lot of go stuff in cryptocurrency projs, mainly when they aren't crypto heavy Some of the first snarks were done in go https://github.com/Consensys/gnark Title: GitHub - Consensys/gnark: gnark is a fast zk-SNARK library that offers a high-level API to design circuits. The library is open source and developed under the Apache 2.0 license I like Go a lot actually But maybe I'm peabrain https://pbs.twimg.com/media/EEgrCUcWsAIs2QN?format=jpg&name=small Although "I want no local storage near me" was one of the key ideas in plan9 https://vadosware.io/post/how-and-why-haskell-is-better/ Title: How and why Haskell is better (than your favorite $LANGUAGE) i read this the other day The utopia being you can just use any computer and access your files as you would normally that would be amazing Yeah It'd be like phone booths You could insert a coin and make a call from anywhere to anywhere The same thing scales to files, providing that infrastructure is perfect But we live in a world with too many adversaries how did we end up with google docs unreal UI UI assimilates masses Like how everyone has an iphone Like how AirBNB killed Booking.com Like how it was never the year of the Linux desktop this year looks promising XD linux is making a comeback 4% i was watching a video with some normies talking about this mac plugin to tile your windows lol they thought it was incredible Good UI will get us far 'productivity hack' We need to get that simple and functional mad how everybody prefers matrix over irc Everyone hates metamask now many software projs moving to matrix instances i hate elements, and the irc integration sucks Discord still seems king there (weechat-matrix plugin) Discord is IRC btw yeah but even for free software projs Yeah lol weird i didn't know that twitch.tv chat is also IRC how do they add all the emojis and do multilined messages?
UI And protocol extensions unicode chars :D https://ircv3.net/#what-were-working-on Title: Welcome - IRCv3 https://kiwiirc.com/ Title: KiwiIRC - The webIRC client https://thelounge.chat/ Title: The Lounge There's a bunch You see how the latter is very similar to discord app that ircv3 link is great Yeah though there was backlash from that since it also involved people who did the freenode takeover https://irc.com/ Title: IRC.com This is by Private Internet Access and that Andrew Lee dude ah a lot of stuff is gone from that site the lounge doesn't have multi-line messages *shrug* main extensions to irc should be: 1. async insert of messages for p2p 2. images 3. multi-line (for example code snippets) but tbh i only care about #1 1. append-only 2. links 3. pastebin :p links are insecure >t. https://xkcd.com/1782/ Title: xkcd: Team Chat No less than arbitrary file upload And less of a liability than having to host questionable files just for images, rather than clicking arbitrary links or websites : @parazyd pushed 1 commit to master: 61dee47ad1: tx: Improve log messages reading through the previous convo, how do you determine stuff like the gas limit of 400M, or fees and the like? Is it based off what others are doing, or is there some general guide? It's totally artificial and arbitrary at this point We need to see some real world usage and do benchmarks ah right, is there a formula to follow though? You charge for "time" essentially The longer something takes to compute, the more resources it takes And therefore should be more expensive I think all of it can be thought of as "time" in the end Even the data you read/write from databases interesting is that how these L2s manage to bring down fees, because they're faster? Like Starknet for example I honestly don't know Might be a good research topic I will check it out so basically fees are lower because transactions are done off the main L1 chain, or they can be batched together as a single transaction Starknet has some great documentation for this stuff if you're looking for inspiration: https://docs.starknet.io/documentation/architecture_and_concepts/Network_Architecture/fee-mechanism/ Title: Gas and transaction fees :: Starknet documentation Thanks, I'll be sure to read that :) : @parazyd pushed 1 commit to master: f733f094e7: validator: Account for Schnorr signature verification for fees in txs :) : @parazyd pushed 1 commit to master: 36315f09fd: validator: Add TODO note about RAM usage for circuit VKs regarding the UI/UX talk, I think the dark.fi website is awesome in that regard, especially the colours used (green light in a dark setting) and finishing the manifesto with 'let there be dark'. If you can carry that all over it will attract the right people Yeah that is the general aesthetic of choice it's what drew me in initially, well that and someone posted a video of dark wallet from 2013, which I found inspiring and which led me to the project : @aggstam pushed 1 commit to master: 8af99afe71: drk2: initialize schemas brawndo: ^^ it's coming together :D Sweet btw I find this nice for browsing sqlite dbs: https://sqlitebrowser.org/ Title: DB Browser for SQLite Not as cumbersome as the CLI sometimes brawndo: yeah I use that :) yeah we use that at work too what browser do you all recommend? browser as in web browser? yes sorry, meant web browser None lol They're all bad ^^ XD yikes lol librewolf build from source what about brave?
It's meant to be privacy focused, no idea if it is tho lol nope https://ircv3.net/specs/extensions/server-time Title: `server-time` Extension - IRCv3 looks like we can support inserting messages into the buffer directly see supported clients list haumea: or you can use a plumbing script with weechat? i don't think so no you add it, your terminal st supports it to view historic messages in weechat? to view images wdym historic messages? in darkirc we do a replay, but ircv3 allows you to put timestamps on PRIVMSG : @parazyd pushed 2 commits to master: 3cb838daeb: validator/fees: Add gas use calculator for ZK circuits : @parazyd pushed 2 commits to master: 02e1885d40: validator/verification: Account for ZK proof verification cost when verifying tx fee oh wait, I read inserting images not messages XD but the plumbing usage can still be added haumea: The timestamps are a client thing and the client chooses how to render it also editing/deleting messages is being added https://github.com/ircv3/ircv3-specifications/pull/524 haumea: We also already have the timestamps attached to IRC messages through the event graph Title: Add message redaction by progval · Pull Request #524 · ircv3/ircv3-specifications · GitHub Editing and deleting is also part of the client rendering, it's just certain types of messages with attached metadata The important thing is backwards compatibility with existing clients where are the timestamps? i don't see it in the code you mean in PRIVMSG? https://github.com/darkrenaissance/darkfi/blob/master/src/event_graph/event.rs#L34-L35 i'm talking about darkirc/src/irc/ Extract it from the event It's not supposed to be in PRIVMSG yes it is, the ircv3 spec has it there IRCv3 is not a real spec it's widely supported by many clients https://ircv3.net/specs/core/capability-negotiation Don't distract yourself interesting, weechat has it too gna merge net_hostlist today unless anyone has further comments bbl cya gm lain: Go for it :) :D b : @lunar-mining pushed 12 commits to net_hostlist: f5a9cf3e96: drk: drk rewritte skeleton added : @lunar-mining pushed 12 commits to net_hostlist: 279fdf6a5f: spec: rename coin params to coin attrs : @lunar-mining pushed 12 commits to net_hostlist: 7e1795b6e8: dao: apply same verifiable encryption for other outputs, to the DAO change output as well. : @lunar-mining pushed 12 commits to net_hostlist: 317e443b7d: validator/verification: Fix tx fee call index : @lunar-mining pushed 12 commits to net_hostlist: bb5b015a00: tx: Improve log messages : @lunar-mining pushed 12 commits to net_hostlist: bf43e8f77d: validator: Account for Schnorr signature verification for fees in txs : @lunar-mining pushed 12 commits to net_hostlist: f2ef873e32: validator: Add TODO note about RAM usage for circuit VKs : @lunar-mining pushed 12 commits to net_hostlist: 0800757693: drk2: initialize schemas : @lunar-mining pushed 12 commits to net_hostlist: 2fdcc78b17: validator/fees: Add gas use calculator for ZK circuits : @lunar-mining pushed 12 commits to net_hostlist: d23f93c1cf: validator/verification: Account for ZK proof verification cost when verifying tx fee : @lunar-mining pushed 228 commits to master: 090e8fddfd: hosts: add probe_node() method : @lunar-mining pushed 228 commits to master: 80d6eae22e: hosts: create methods to store hosts in greylist after version exchange and periodically probe_nodes, whitelisting them if responsive : @lunar-mining pushed 228 commits to master: 82da1ef2bb: hosts: if lists reach max size, remove the oldest entry from the list.
: @lunar-mining pushed 228 commits to master: 240263cf84: hosts: remove whitelisted peers from the greylist and improve random greylist selection process : @lunar-mining pushed 228 commits to master: e4b366cf68: hosts: write test and convenience methods : @lunar-mining pushed 228 commits to master: b13ecb2811: hosts: create store_greylist() and store_whitelist() methods and tests : @lunar-mining pushed 228 commits to master: a74557131b: outbound_session: create run2() method that changes run() behavior to new whitelist protocol.... : @lunar-mining pushed 228 commits to master: f3b71f4fdc: net: call refresh_greylist() inside outbound_session::run()... : @lunar-mining pushed 228 commits to master: d7d80b6f11: net: implement a new ProtocolAddr that sends addrs from the whitelist and receives to greylist... : @lunar-mining pushed 228 commits to master: 00fdaaa0ea: hosts: reimplement test_greylist_store() : @lunar-mining pushed 228 commits to master: c074282301: net: remove channel from the whitelist and add to the greylist if we fail to establish a connection. : @lunar-mining pushed 228 commits to master: efe3ca7214: net: move whitelist_fetch_address_with_lock() to hosts, and change whitelist_downgrade() function call to take an url, not an (addr, u64) : @lunar-mining pushed 228 commits to master: 7952b8ad41: lilith: store last_seen in host list. also change outbound_session to run new protocol : @lunar-mining pushed 228 commits to master: 9a09e8c6cd: net: remove HostPtr from ProtocolVersion and update probe_node() : @lunar-mining pushed 228 commits to master: 03ce1324bd: net: ProtocolSeed stores addrs on the greylist, and broadcasts own address with last_seen.... : @lunar-mining pushed 228 commits to master: 2c01db5270: net: migrate outbound sessions over to new protocol. also replace lilith periodic_purge with periodic_cleanse.... : @lunar-mining pushed 228 commits to master: a6c74eda87: net: migrate to new AddrMessage format : @lunar-mining pushed 228 commits to master: 053cb71a52: net: move refresh_greylists() out from hosts and implement GreylistRefinery struct/ process in outbound session... : @lunar-mining pushed 228 commits to master: 4f0c4cdc0a: net/lilith: move refresh_whitelist() process out of hosts and back into lilith. : @lunar-mining pushed 228 commits to master: a19e20e006: net: cleanup : @lunar-mining pushed 228 commits to master: 3c30fe64ab: net: properly integrate GreylistRefinery in outbound session : @lunar-mining pushed 228 commits to master: 406a37bbb4: net: only run GreylistRefinery if the greylist is not empty. also properly initalize Weak : @lunar-mining pushed 228 commits to master: cf3642fb3a: net/ lilith: change last_seen to use UNIXEPOCH instead of SystemTime : @lunar-mining pushed 228 commits to master: 491ad0a318: net: add debug statements : @lunar-mining pushed 228 commits to master: 9f25cb4f10: net/ settings: add "advertise" to settings (default value = true) : @lunar-mining pushed 228 commits to master: a7b4f60af4: net: implement ping_node() in OutboundSession and ping self before sending own address in ProtocolAddr, ProtocolSeed... : @lunar-mining pushed 228 commits to master: 519353ae42: hosts: fix minor typo : @lunar-mining pushed 228 commits to master: cc602048d6: net: standardize format + fix logic error on protocol_seed, protocol_address self_my_addrs() : @lunar-mining pushed 228 commits to master: 175f6e78a1: net: commit working test : @lunar-mining pushed 228 commits to master: c0a47457f8: net: BUGFIX: stop duplicate entries in greylist... 
: @lunar-mining pushed 228 commits to master: de743a03b6: lilith: remove all peerlist filtering logic : @lunar-mining pushed 228 commits to master: d4541d4315: net: fix typo in protocol/mod.rs documentation : @lunar-mining pushed 228 commits to master: 066d3dc9c5: net: fix warnings and cargo fmt : @lunar-mining pushed 228 commits to master: 748c659f93: lilith: fix warnings : @lunar-mining pushed 228 commits to master: de2fb840bf: net: invoke GreylistRefinery in p2p.rs and cleanup : @lunar-mining pushed 228 commits to master: 0639e9bdf7: net: working greylist protocol... : @lunar-mining pushed 228 commits to master: ae5b4d0a69: net/store: reimplement test_greylist_store() : @lunar-mining pushed 228 commits to master: c4ebcb3d45: net: reimplement address filtering on greylist_store().... : @lunar-mining pushed 228 commits to master: a61a08c020: net: remove whitelist_store_or_update call from OutboundSession... : @lunar-mining pushed 228 commits to master: 80df5b68b5: net/hosts: Add missing mod.rs : @lunar-mining pushed 228 commits to master: ef3b95ffdf: net: avoid adding our own address to the greylist when on localnet... : @lunar-mining pushed 228 commits to master: e693b48cb5: net: clean up reference/ pass by value usage in store.rs : @lunar-mining pushed 228 commits to master: 845b9ded6b: net: fix tests on store.rs : @lunar-mining pushed 228 commits to master: 1a282a951d: net: call channel.stop() when we get a handshake error on ping_node : @lunar-mining pushed 228 commits to master: d591fac8dc: net: remove whitelist_downgrade() from outbound_session (monero doesn't do this) : @lunar-mining pushed 228 commits to master: 560b332e37: net: create perform_local_handshake which does a version exchange without adding channel to the p2p store, and use in ping_node : @lunar-mining pushed 228 commits to master: 065f254661: lilith: comment out broken load_hosts code and add FIXME note : @lunar-mining pushed 228 commits to master: 3725de07ec: net: and anchorlist and minimal utilities. also clarify hosts specific TODOs. : @lunar-mining pushed 228 commits to master: ebe8eb1626: net: check whether host is in the peerlist before adding to greylist. also make additional anchorlist utils.... : @lunar-mining pushed 228 commits to master: 03ae65956a: net: add peer to the anchorlist with an updated last_seen when we call p2p.store() on a connected channel : @lunar-mining pushed 228 commits to master: b5bf749fe9: net: replace outbound connection loop with monero grey/white/anchor connection_maker()... : @lunar-mining pushed 228 commits to master: c850f629b8: net: cleanup connect loop code reuse by implement connect_slot() method. also prevent infinite loop by doing peer discovery when the hostlist is empty.... : @lunar-mining pushed 228 commits to master: 6a39e926f1: net: prevent inbound session channels from being stored in the anchorlist : @lunar-mining pushed 228 commits to master: 0096f778c6: net: improve outbound_session connection loop logic. : @lunar-mining pushed 228 commits to master: 03e6e99e90: net: move host selection logic back into hosts/store to avoid insane nesting in outbound session loop : @lunar-mining pushed 228 commits to master: 5be6a07c61: net: add save_hosts() and load_hosts() methods and invoke on greylist refinery start and stop : @lunar-mining pushed 228 commits to master: b456d8f5ec: lilith: remove load and save host functionality (made redundant by greylist upgrade) : @lunar-mining pushed 228 commits to master: d15cc3b2bd: net: read hostlist path from Settings. 
Define a default setting and allow overriding in config : @lunar-mining pushed 228 commits to master: 995ff6f6c2: lilith: add hostlist path to NetInfo and default config : @lunar-mining pushed 228 commits to master: 6e8671d5b0: net: create greylist_refinery_interval in net::Settings and update TODOs : @lunar-mining pushed 228 commits to master: ca4d523dd3: net: remove unwrap()'s and cleanup : @lunar-mining pushed 228 commits to master: a555f2e744: net: add anchor_connection_count and white_connect_percent to Settings and cleanup : @lunar-mining pushed 228 commits to master: 51b4263a93: net: remove connection from anchorlist when it disconnects and cleanup.... : @lunar-mining pushed 228 commits to master: 873cd35e0e: net: add hostlist documentation : @lunar-mining pushed 228 commits to master: 4d4392f9e8: net: add test module to mod.rs : @lunar-mining pushed 228 commits to master: 07c2d667e1: session: remove redundant anchorlist write... : @lunar-mining pushed 228 commits to master: ca885a43ee: store: improve error naming... : @lunar-mining pushed 228 commits to master: 2696290aad: store: fix logic on is_empty_hostlist()... : @lunar-mining pushed 228 commits to master: 1578138e8f: outbound_session: move fetch_address logic into new function : @lunar-mining pushed 228 commits to master: 18479be298: test: add seed node to net/test.rs : @lunar-mining pushed 228 commits to master: 426efdf90b: chore: cargo fmt : @lunar-mining pushed 228 commits to master: 27d1b3aa03: hosts: fix logic on anchorlist_fetch_with_schemes... : @lunar-mining pushed 228 commits to master: 5f00598c12: outbound_session: remove peer from anchor or whitelist when try_connect fails : @lunar-mining pushed 228 commits to master: ad3675eb3c: store: fix death loop... : @lunar-mining pushed 228 commits to master: 79e9039b9b: store: create test_fetch_anchorlist() unit test : @lunar-mining pushed 228 commits to master: 0ac12ff19d: store: add test_fetch_address() unit test : @lunar-mining pushed 228 commits to master: e472003b6d: outbound_session: fetch_address() logic bug fix... : @lunar-mining pushed 228 commits to master: 4cde069c53: store: document and cleanup : @lunar-mining pushed 228 commits to master: fcf5a87a28: net: fix deadlock (partial fix)... : @lunar-mining pushed 228 commits to master: 96cad54d81: net: 99.9999% of the time it works 100% of the time... : @lunar-mining pushed 228 commits to master: 736459aa51: chore: cleanup... : @lunar-mining pushed 228 commits to master: 2dbaf413a0: protocol_seed: fix bool syntax : @lunar-mining pushed 228 commits to master: c7cf7d861d: lilith: change no hostlist warning to fatal panic : @lunar-mining pushed 228 commits to master: 639f1f72bf: store: fix and simplify tests : @lunar-mining pushed 228 commits to master: 40619581cd: store: reduce LOC in hostlist queries and update usage.... : @lunar-mining pushed 228 commits to master: 765bd819b2: net: change unwrap() to expect() on hostlist queries : @lunar-mining pushed 228 commits to master: 3f51d80438: chore: fix test fixes : @lunar-mining pushed 228 commits to master: b38a1267fb: store: remove redundant else clauses : @lunar-mining pushed 228 commits to master: e40405a257: store: bug fix... : @lunar-mining pushed 228 commits to master: f08ce9a4c8: chore: fix comment positionning on manual_session : @lunar-mining pushed 228 commits to master: 3abd2c62bb: net: don't hide connection upgrade inside perform_handshake_protocols()... 
: @lunar-mining pushed 228 commits to master: fb4306e1e4: store: fix logic error in greylist_store_or_update... : @lunar-mining pushed 228 commits to master: bd0c7684c8: outbound_session: replace downgrade_host() with `rejected` vector... : @lunar-mining pushed 228 commits to master: 5f83327aec: chore: delete unused methods : @lunar-mining pushed 228 commits to master: 4f4e4fb5b3: net: small integration test tweaks : @lunar-mining pushed 228 commits to master: ec5abf9683: net: make clippy + fix test : @lunar-mining pushed 228 commits to master: c0e23dca86: net: fix ports on test : @lunar-mining pushed 228 commits to master: 4bf43ec521: net: downgrade whitelist to greylist on stop... : @lunar-mining pushed 228 commits to master: 06ae4fd054: settings: change refinery interval default to 5 seconds... : @lunar-mining pushed 228 commits to master: 99d0adc5bb: settings: change default hostlist to .local/darkfi/CARGO_PKG_NAME... : @lunar-mining pushed 228 commits to master: b5119dff94: Revert "net: downgrade whitelist to greylist on stop"... : @lunar-mining pushed 228 commits to master: 388a190d49: store: save whitelist entries on the greylist on stop (no locks)... : @lunar-mining pushed 228 commits to master: 8d5963961b: refinery: stop refinery process before saving the hostlist : @lunar-mining pushed 228 commits to master: 4ded978f06: chore: cargo fmt : @lunar-mining pushed 228 commits to master: 3d5eabfe59: net: downgrade host if they disconnect or we can't connect to them.... : @lunar-mining pushed 228 commits to master: eda5c69af4: lilith: add whitelist_refinery task... : @lunar-mining pushed 228 commits to master: f3361db4c4: lilith: change hostlist paths on default config : @lunar-mining pushed 228 commits to master: 2674cfd32e: store: do not shuffle hosts on fetch_address()... : @lunar-mining pushed 228 commits to master: 576afd574d: store: create test_remove() unit test : @lunar-mining pushed 228 commits to master: fed0e582c1: net: bug fixes and cleanup... : @lunar-mining pushed 228 commits to master: deb3ea5936: net: cleanup warnings + run make clippy : @lunar-mining pushed 228 commits to master: 818ceaec4d: store: don't remove from greylist or whitelist on anchorlist upgrade... 
: @lunar-mining pushed 228 commits to master: 1a60b322a0: dchat: renamed dchat to dchatd and add placeholder dchat-cli : @lunar-mining pushed 228 commits to master: dc26084279: doc: add dchat tutorial to SUMMARY : @lunar-mining pushed 228 commits to master: e8f93527f9: Cargo.toml: change dchatd directory to example/dchat/dchatd : @lunar-mining pushed 228 commits to master: dd11c47af5: doc: fix dchat tutorial chapter2 : @lunar-mining pushed 228 commits to master: 2dff107fee: doc: create dchat tutorial chapter 4 and specify TODOs : @lunar-mining pushed 228 commits to master: c4d5f90020: doc: finalize dchat tutorial and add TODOs : @lunar-mining pushed 228 commits to master: 7285b80600: doc: update SUMMARY with new dchat tutorial flow : @lunar-mining pushed 228 commits to master: fafa2c53a4: dchat: add anchors/ fix ports/ uncomment daemon : @lunar-mining pushed 228 commits to master: fa71b711a1: spec2: concepts page : @lunar-mining pushed 228 commits to master: 494da41475: contrib/dependency_setup.sh: wabt dep dependency added for xbps : @lunar-mining pushed 228 commits to master: 4d411b0934: book: git mv spec2 spec : @lunar-mining pushed 228 commits to master: fecd412ecf: book: botched move : @lunar-mining pushed 228 commits to master: d24248d026: contract/money: Rename "integration" test to "token_mint" : @lunar-mining pushed 228 commits to master: 2f9a9cc237: contract/money/integration: Gather block reward owncoins : @lunar-mining pushed 228 commits to master: 31d0d2f617: validator: Configurable fee verification, incomplete... : @lunar-mining pushed 228 commits to master: daa625d856: validator:pow: decoulbed mine_block() from PowModule so it can be used outside of it : @lunar-mining pushed 228 commits to master: 4fffd4ac2c: script/research/minerd: miner daemon skeleton : @lunar-mining pushed 228 commits to master: 2074625d1d: src/event_graph: request and reply multiple events : @lunar-mining pushed 228 commits to master: 2fe38d699c: remove unused import : @lunar-mining pushed 228 commits to master: cda4521dad: spec: change from blake3 to blake2b and add explainer why : @lunar-mining pushed 228 commits to master: 111c803085: spec: DAO::mint() and DAO::propose() : @lunar-mining pushed 228 commits to master: c83347857e: spec: reword section on blake2 : @lunar-mining pushed 228 commits to master: 77fb9d4321: src/event_graph: aquire locks outside loops : @lunar-mining pushed 228 commits to master: 493fbfe1eb: spec: DAO::propose() nullifier : @lunar-mining pushed 228 commits to master: ce5a92ff21: spec: add money contract with money transfer : @lunar-mining pushed 228 commits to master: d67c3d2029: spec: DerivePubKey() : @lunar-mining pushed 228 commits to master: d42ba0d511: contract: Move POW_REWARD constant to money contract : @lunar-mining pushed 228 commits to master: ab044b02fd: contract/test-harness: Update VKS and PKS checksums : @lunar-mining pushed 228 commits to master: 5d8e4a5451: Revert "contract: Move POW_REWARD constant to money contract"... 
: @lunar-mining pushed 228 commits to master: d7ef5c25e1: contract/money/integration: Assert expected PoW reward : @lunar-mining pushed 228 commits to master: 8e5e997426: contract/test-harness: Include Money::FeeV1 zk circuit for cached vks : @lunar-mining pushed 228 commits to master: 8087222a86: validator/verification: Allow fee call at any place in the transaction : @lunar-mining pushed 228 commits to master: c7248f44f6: darkfid2: parse network config directly from the config file, not as flattened arg : @lunar-mining pushed 228 commits to master: e46c6dbb6b: sdk/crypto: use the same generator for pedersen_commit_base() and pedersen_commit_u64() : @lunar-mining pushed 228 commits to master: be4898e0c7: Revert "sdk/crypto: use the same generator for pedersen_commit_base() and pedersen_commit_u64()"... : @lunar-mining pushed 228 commits to master: 6b9ea039d7: spec: add money coin, current day, pedersen commits : @lunar-mining pushed 228 commits to master: 4cd92a8189: dao/spec: rename all mentions of slot to blockheight : @lunar-mining pushed 228 commits to master: 639b4e89df: test-harness: s/slot_to_day/blockheight_to_day/ : @lunar-mining pushed 228 commits to master: d3fae80d5a: chore: Update copyright year in license headers : @lunar-mining pushed 228 commits to master: 3d36a1b382: contract/money/Cargo.toml: added missing darkfi validator feature : @lunar-mining pushed 228 commits to master: 5b104ef6a5: runtime: Minor comment cleanups and log verbosity. : @lunar-mining pushed 228 commits to master: e16e5ceed9: runtime/import: Enable pages assertion in put_object_bytes() : @lunar-mining pushed 228 commits to master: 0d67df0adb: runtime/import: General function cleanup and use darkfi_sdk error codes... : @lunar-mining pushed 228 commits to master: 703666623f: sdk: Apply relevant changes related to 3240221614727e7bb754de6b33397dc90a92ddee : @lunar-mining pushed 228 commits to master: eb2dc41dd7: sdk/util: added block height retrieval functions and use them at appropriate places : @lunar-mining pushed 228 commits to master: 1b19b54099: contract/money/error: added missing error code : @lunar-mining pushed 228 commits to master: 764da6e7c8: spec: add missing merkle tree section : @lunar-mining pushed 228 commits to master: 4f97ed6989: runtime: Begin implementation of host function gas costs : @lunar-mining pushed 228 commits to master: ab3a55c6fe: runtime/import: Subtract gas fee in get_slot() : @lunar-mining pushed 228 commits to master: 929166b412: spec: add dao::vote() : @lunar-mining pushed 228 commits to master: 8a10b292c3: runtime/import/db: Implement host gas cost for zkas_db_set()... : @lunar-mining pushed 228 commits to master: fe28c86fa3: runtime/import/db: Apply gas subtraction to remaining db.rs functions : @lunar-mining pushed 228 commits to master: f5a9cf3e96: drk: drk rewritte skeleton added : @lunar-mining pushed 228 commits to master: e4af7ec436: spec: DAO::exec() : @lunar-mining pushed 228 commits to master: 279fdf6a5f: spec: rename coin params to coin attrs : @lunar-mining pushed 228 commits to master: 5fed38cb8f: runtime/import/merkle: Account for gas costs in merkle_add() : @lunar-mining pushed 228 commits to master: 7e1795b6e8: dao: apply same verifiable encryption for other outputs, to the DAO change output as well. 
: @lunar-mining pushed 228 commits to master: 317e443b7d: validator/verification: Fix tx fee call index : @lunar-mining pushed 228 commits to master: bb5b015a00: tx: Improve log messages : @lunar-mining pushed 228 commits to master: bf43e8f77d: validator: Account for Schnorr signature verification for fees in txs : @lunar-mining pushed 228 commits to master: f2ef873e32: validator: Add TODO note about RAM usage for circuit VKs : @lunar-mining pushed 228 commits to master: 0800757693: drk2: initialize schemas : @lunar-mining pushed 228 commits to master: 2fdcc78b17: validator/fees: Add gas use calculator for ZK circuits : @lunar-mining pushed 228 commits to master: d23f93c1cf: validator/verification: Account for ZK proof verification cost when verifying tx fee : @lunar-mining pushed 228 commits to master: 34d3206ce8: Merge branch 'net_hostlist' : @lunar-mining pushed 228 commits to master: ef0d1c6f59: chore: update year on copyright text on new files lain: for cleaner branch merge git checkout {branch} -> git rebase master -> git push -f and then when merging: git checkout master -> git merge {branch} the first flow rebases your branch onto master, moving all your commits after its head, so when you merge back you don't get the repush/merge commit yeah that was kinda a mess lol ty on git rebase master you don't resolve any conflicts yeah just mentioning, keep the flow noted so you know it for future merges btw same flow should be followed when you open a pr in a repo ++ so your pr commits always go above upstream master so you don't fuck up upstream commits yeah mistake here was merging master into remote branch first, then when a few days later ran 'git merge [branch]' on master, it created conflicts so i had to update remote with new commits then merge oh you didn't force push then right? but kinda fucked up the tree no force push anyway we are nitpicking the real thing is to test this :) so organize with dasman to deploy some nodes and we test with each other i have the git thing noted tho thanks since dasman also pushed some darkirc stuff ++ so we test also those and the p2p improvements lain: which are the latest p2p commits? I haven't checked the last fixes 818ceaec4d6ae00a0f5aa0e848b59503d2475c29 and like 5 or so commits below lain: you shouldn't fetch random from whitelist lain: just lmk when that means that there is a chance that you always ping an active node, keeping trash indefinitely since the list is sorted, always check the last N ones since they will be moved on top if they are active effectively having a ring buffer https://github.com/darkrenaissance/darkfi/commit/eda5c69af470bf9d70d2b6c0b7325ab1f4d9364f#diff-5740e6d6d6163f1acf1415ec0e5e1c8d46ed196e80ffa7c0f22c61e3a6cc9a4eR155 in this impl we check 1 node per minute so the N might not be needed although I would use it to make sure we check everyone on a decent interval (talking about lilith whitelist_refinery(2)) checking from the end/ simulating a ring buffer makes sense : @lunar-mining pushed 1 commit to master: 8f05b489ca: doc: cross out TODO and add hostlist documentation to book nice, glad to see this merged gj : @lunar-mining pushed 1 commit to master: 15d5f7e6d4: lilith: select last element from the whitelist, not random element... ty senpai Nice : @aggstam pushed 1 commit to master: 6ba1fb5947: drk2: retrieval of multiple db records added Hey everyone I am new here and I am hoping someone could help me out. I have fully synced a darkfi node and I am trying to initialize a wallet.
When I run the command `./drk wallet --initialize` I get an error `Error: Connection failed`. hey depth we're working on a new testnet rn the current one has been running a long time and isn't being maintained upgrayedd knows more on the specifics ah okay so I should just hang tight for the time being? yes, we'll be deploying a new chat called darkirc soon, could use some help testing that if you're interested otherwise yes working on a PoW testnet rn okay cool yeah I would be interested in that. :) : @Dastan-glitch pushed 1 commit to master: d5faf7296a: dao: small simplification afk today : @Dastan-glitch pushed 1 commit to master: 8028ab4aca: completed DAO spec : @Dastan-glitch pushed 1 commit to master: 76609f7393: spec: minor add to notation.md see you : @Dastan-glitch pushed 1 commit to master: 6b624a79c2: spec: add explainers to DAO functions : @Dastan-glitch pushed 1 commit to master: 00cebdeccc: spec: dao include list of files used for each function : @Dastan-glitch pushed 1 commit to master: 42acaf262f: dao: rename proof/dao-(vote|propose)-burn.zk to proof/dao-\1-input.zk : @Dastan-glitch pushed 1 commit to master: f0ab36d228: dao spec: typos and minor adds : @Dastan-glitch pushed 1 commit to master: 8c96e1646b: dao spec: add missing info on signatures : @Dastan-glitch pushed 1 commit to master: e218881278: dao spec: correct typo !topic bs58 vs base64 standardization Added topic: bs58 vs base64 standardization (by upgrayedd) !list Topics: 1. funcid in coin (by haumea) 2. bs58 vs base64 standardization (by upgrayedd) !topic p2p next steps Added topic: p2p next steps (by beep) : @aggstam pushed 5 commits to master: cebeacd858: drk2: addresses functions added and simplified some internal calls : @aggstam pushed 5 commits to master: cf56a07c3d: drk2: finished up wallet subcommand functionality : @aggstam pushed 5 commits to master: d440003fd1: drk2: Unspend, Inspect, Broadcast, Subscribe, Scan and Alias functionalities added : @aggstam pushed 5 commits to master: 4cc4082bb2: darkfid2: notify subscribers for new blocks : @aggstam pushed 5 commits to master: 613f3b3445: net/settings: use default value for hostlist if not present in args/config Greetings. I'm back! I made a PR a few days back. I've been trying to get my setup working on a MilkV Pioneer. Following the Dockerfile as a guide, plus some other local edits as required. https://codeberg.org/darkrenaissance/darkfi/pulls/248 Title: #248 - Removed ZKAS build steps from darkirc Makefile. They are not necessary. This is manifesting as a build error on RISC-V. - darkrenaissance/darkfi - Codeberg.org join #memes gm : @Dastan-glitch pushed 1 commit to master: 51440e732f: sdk/crypto/diffie_hellman: remove .clear_cofactor() call which is useless with Pallas/Vesta curves in sapling_ka_agree(). : @Dastan-glitch pushed 1 commit to master: 36a36b1728: kdf_sapling(): add comment about non-constant function call edge case. gm or good evening from australia : @aggstam pushed 2 commits to master: 0317510dd7: sdk/crypto/diffie_hellman: remove unused import : @aggstam pushed 2 commits to master: 1fcbfdded9: drk2: Explorer functionality added : @Dastan-glitch pushed 1 commit to master: 49c1b1c1ec: spec: add note DH in-band secret distribution : @parazyd pushed 1 commit to master: c25cc17321: rpc/client: Trigger request-read-skip when receiving JSON notifications : @aggstam pushed 1 commit to master: 31dd5d6208: drk2: Transfer functionality added !list Topics: 1. funcid in coin (by haumea) 2. bs58 vs base64 standardization (by upgrayedd) 3.
p2p next steps (by beep) !topic sparse merkle tree set nonmembership trees Added topic: sparse merkle tree set nonmembership trees (by haumea) : @aggstam pushed 1 commit to master: 531ead2cb5: drk2: OTC swap functionality added !topic drk full regression Added topic: drk full regression (by upgrayedd) i just realized i was disconnected all day XD 10:17 PublicKey::from(wnaf.scalar(&esk_s).base(pk_d.inner()).clear_cofactor()) 10:18 from diffie_hellman.rs (sapling_ka_agree) 10:18 the call here to .clear_cofactor() is useless since pallas/vesta doesn't have cofactor (jubjub did) 10:18 it's harmless though 10:35 https://github.com/zkcrypto/group/blob/main/src/cofactor.rs#L29 10:35 https://github.com/zcash/pasta_curves/blob/main/src/curves.rs#L231 10:53 ok i ran a repo root make test and it passes 10:53 also verified it does nothing in pasta_curves so removing the call sup yo Oy! vey Hai !start Meeting started Topics: 1. funcid in coin (by haumea) 2. bs58 vs base64 standardization (by upgrayedd) 3. p2p next steps (by beep) 4. sparse merkle tree set nonmembership trees (by haumea) 5. drk full regression (by upgrayedd) Current topic: funcid in coin (by haumea) the money::transfer() is not checking the funcid, just the contractid, so there's 3 options here: 1. the contract must hash their data with the funcid and store it in the user data then check it 2. money::transfer adds the function code to the coin attributes and checks it 3. we add a funcid mapping are we adding #3 still? We can yeah ok then that's easy ty !next Elapsed time: 3.2 min Current topic: bs58 vs base64 standardization (by upgrayedd) The enum we usually use for function IDs is u8 context: I'm doing the drk rewrite so it's compatible with all the new stuff (more on that in topic 5) So on deploy we can generate 256 hashes which would be (ContractID||0), ... (ContractId||255) They should be guaranteed unique drk ftw brawndo: ok sounds good so I saw that we are using both bs58 encoding (external crate) and base64 (our own impl) to encode stuff, either to use the string directly (for example printing to stdout) or to pass them through json rpc calls we should standardize on what encoding to use, so it's universal where is base64 used? imho since we have our own encoding without external deps (base64) we should use that base58 is easy to port over and ditch bs58 altogether We use base58 for addresses and similar stuff and IMO we should keep using it It's better than b64 for that b64 is used in the JSON-RPC transport to encode arbitrary raw bytes base58 excludes chars from base64 fyi that are confusing like 0 or O, and it is double clickable unlike base64 I don't think we need to change anything, both encodings are being used where they should cos base64 has stuff like + in it if we want to keep using both, should we then create our own base58 impl, to eliminate external deps, or is that too much?
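for scale, the "most basic dumb impl" that comes up below really is small; a sketch of base58 encoding with no leading-zero handling and no optimization, illustrative only rather than a drop-in bs58 replacement:

    /// Repeated division by 58 over a big-endian byte string.
    fn base58_encode(input: &[u8]) -> String {
        const ALPHABET: &[u8] =
            b"123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";
        // base58 digits, least significant first
        let mut digits: Vec<u8> = vec![];
        for &byte in input {
            // Multiply the accumulated number by 256 and add the next byte.
            let mut carry = byte as u32;
            for d in digits.iter_mut() {
                carry += (*d as u32) << 8;
                *d = (carry % 58) as u8;
                carry /= 58;
            }
            while carry > 0 {
                digits.push((carry % 58) as u8);
                carry /= 58;
            }
        }
        digits.iter().rev().map(|&d| ALPHABET[d as usize] as char).collect()
    }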
its easy It's easy, however the crate we use is really fast I don't mind using both encodings in different places, just wanted to bring up the topic so we are all clear on the whys, since it might be confusing when checking the codebase https://github.com/Nullus157/bs58-rs Title: GitHub - Nullus157/bs58-rs: Another Rust Base58 codec implementation https://github.com/libbitcoin/libbitcoin-system/blob/master/src/radix/base_85.cpp It's under MIT, so we can even copy the code in our codebase if you want ah nice i don't think base58 needs or should be optimized for auditability because it's an infrequent wallet facing op https://github.com/Nullus157/bs58-rs/blob/main/src/alphabet.rs#L196-L201 hehe lol ok if we can integrate that internally then it's the best case scenario check the libbitcoin impl, it's much simpler than the rust one That's base85 https://github.com/libbitcoin/libbitcoin-system/blob/master/src/radix/base_58.cpp this one ah yeah true oh yeah also I don't know how fast it needs to be really So maybe indeed better to just do a simple impl and kill a dep or two ++ the bitcoin encoding even has some weird stuff so you can potentially simplify it not needed immediately, but good to have as future optimizations thats all, anything else to add? all gud you can just do the most basic dumb impl of base58, no logic to skip leading 0s or anything weird or we could do base 32 so each byte is represented by exactly 2 chars even simpler/easy I'd rather use hex than b32 Let's just keep it as-is yeah sure There's really no need to start changing things i would simplify the base58 encode/decode logic though if we roll our own since there's a bunch of stuff about leading zeros that isn't really needed thats all ++ whatever works best anyway next? yep !next Elapsed time: 14.1 min Current topic: p2p next steps (by beep) cool so p2p upgrade was merged and next is to migrate and test the apps they should work out of the box minus small tweaks like adding the hostlist path to default config beep: even that shouldn't be needed a commit was added to use default one from net/settings.rs yes the serde defaults should be sufficient 613f3b3445090d630aa89be926f9244c5050dd15 but it's just if ppl want to make it explicit in the default ++ ++ so i guess just rebuilding and testing and hopefully no further debugging needed lilith was updated to work with the new stack? lets start with darkirc thats the main app right now ready to be shipped yes lilith should just work, but should be tested w apps etc ++ okay i'd deploy my nodes I'll deploy two lilith tmrw morning for darkirc nice ill deploy darkirc too 2 nodes dnet should just work also, haven't tested it in a while, maybe i should add hostlist debugging stuff to dnet lets organize internally in the next days to test it out ++ cool ty Cool !next Elapsed time: 4.0 min Current topic: sparse merkle tree set nonmembership trees (by haumea) so there's an imperfection in the DAO voting currently when writing the spec, I realized we don't check whether the coin used to vote was already spent so currently we expose the nullifier, so after voting, you should immediately move your coins although it's not perfect since if you vote on other proposals still around then you will be linked the solution is a set nonmembership proof inside ZK. hashing the nullifier? then you shouldn't need to move your coins we have a sparse merkle tree impl, could we get that working?
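the shape of that idea as a toy interface sketch; the types are stand-ins, and the real version would be the halo2 SMT gadget discussed next, verified inside ZK against an on-chain root:

    /// Merkle path showing the leaf at the nullifier's position is empty.
    struct NonMembershipProof {
        path: Vec<[u8; 32]>,
    }

    trait NullifierSet {
        /// money::transfer() inserts every revealed nullifier.
        fn insert(&mut self, nullifier: [u8; 32]);
        /// Root committed on-chain, so provers and verifiers agree on the set.
        fn root(&self) -> [u8; 32];
        /// Prove `nullifier` is NOT in the set, i.e. the voting coin is
        /// still unspent.
        fn prove_nonmembership(&self, nullifier: &[u8; 32]) -> Option<NonMembershipProof>;
    }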
then nodes would need to maintain one in money::transfer() for all nullifiers so when the state is forked in DAO::propose(), we can also detect whether coins are spent, not just whether they existed is that clear? haumea: You can go through this and find a way to implement it as a gadget for us: https://github.com/young-rocks/rocks-smt Title: GitHub - young-rocks/rocks-smt: A Sparse Merkle Tree circuit constructed with Halo2 poseidon. IIRC it has some bug/vuln but we can get that fixed with an audit yeah i'm just worried about timing cos I want to finish the money contract i could maybe leave it until the end I need to finish fees DAO spec is finished nice so is SDK crypto That's great just need to do money is there anything else except those? I believe PoW consensus stuff (The coinbase tx and proposal) i guess i'll finish money, then if there's time, quickly try to fix this DAO thing. it's not a blocker Sounds good src/contract/consensus/? Do check out the repo I linked, it has it all cool, we could also make tweaks later and reengage them too they can just do a diff ah no src/contract/consensus/ doesn't have PoW stuff, that's in Money So you'll get to it contract/consensus is PoS stuff ok so everything in src/contract/money basically? Maybe check out Deployooor contract too, it's simple that shouldn't be needed right now is that all? money contains the PoWReward Just has a ZK proof about secret key derivation aha ok deplooyer too deployooor :D next? yeah !next Elapsed time: 9.2 min Current topic: drk full regression (by upgrayedd) ok so I said earlier, drk is undergoing a major rewrite, to work with latest internal changes, like rpc and darkfid2 thats gr8 news that means that src/walletdb will get yeeted, as drk will do all its handling internally Yeah shifting all complex stuff from node to wallet very good right now the plan is to just make it compile with everything so after that's ready (in the next few days(TM)) we need to do a full regression, as current sql tables (or structs) might not be up to date or new functionalities (like contract deploy and fees) are missing <0xhiro> anyone here knows how can i add lunardao channel? 0xhiro: use /join #lunardao we have a decent darkfid2 + minerd setup to have a local blockchain to test stuff against <0xhiro> thanks :) 0xhiro: also look in main buffer alt+1 and type /set, then search settings there like 'autojoin'... see the toolbar how to edit the settings so when it's ready, we need to do a full regression for everything Nice I will probably handle money, haumea can you do dao? brawndo you should go for deplooor and fees i need to focus on spec right now will do next month ACK well I didn't say immediately lol ok the API is the same as before I first have to finish the fees API stuff yy chill, just keep in mind for your next steps Then by end of week I should be able to do the wallet stuff since drk is basically a requirement for next testnet release Deploying contracts already works(TM) just has to be plugged in I will handle the repo cleanup (removing current drk and walletdb) when rewrite is finished so then drk will be ready for you to hack into thats all next? !next Elapsed time: 6.3 min No further topics thanks a lot for drk rewrite fun meeting bros Sweet feels like we're moving forward in the release cycle ++ thanks a lot people dasman: Can you also deploy a seed node and darkirc?
sure well we have half our dev team back again since last year We don't have to use the main lilith ones yet until we test So maybe best actually if you deploy them today and get things in place so we can test tomorrow and the next days ++ ++ building darkirc now gg everyone, bbl i am scared lol plz werk lmao i will deploy some nodes as well tnx every1 beep: the real beauty in coding is when things don't work There is no running seed node for it btw oh rly haha So let dasman deploy it first ;) ++ upgrayedd: i was thinking of a way to exactly that :D hello hi aiya ** to say how to deploy nodes? can we build from main branch or 0.4.1 ? depends for what aiya ircd is on branch 0.4.1 but we are testing some stuff on latest master rn darkirc and lilith namely darkirc would be useful if you want to help test the darkirc <> ircd mirror is missing : test test back ah nvm 15:44:28 [ERROR] [P2P] Network reseed failed: Failed to reach any seeds lol : :D which seeds should i use? sorry got disconnected, still figuring out ircd As I said twice (and now thrice) 16:40 There is no running seed node for it btw 16:40 So let dasman deploy it first ;) XD ah ok well holla when it's up and ill deploy building rn XD b-b-building !next Elapsed time: 9.3 min No further topics !end Elapsed time: 0.2 min Meeting ended nice chitchat thanks all cya next week for the latest gossip Good meeting Be sure to leave feedback to your nearest janny : @aggstam pushed 1 commit to master: f9683b867e: drk2: Token functionality added hey guys, is there any point to running a testnet node? I know it is going to be relaunched again and if it is of no help to the development, I can switch it off? kopachke: current testnet no, turn it off darkirc: seeds = ["tcp+tls://dasman.xyz:5262"] altho i ran into a death loop that i couldn't recreate beep: will try a bit more and get back to you : @Dastan-glitch pushed 1 commit to master: 2f0d966491: update darkirc tmux test session beep: checkout tmux_session in bin/darkirc/script, try to stop node 1 the one that advertises its own address, and the other nodes go into crazy mode I also get IO error: broken pipe on node 1 also with these specific configs only node 1 msgs get broadcasted Anyway, these are what I've observed so far, the seed node is running and I have a peer running as well for darkirc ty dasman, digging into this now fyi advertise is "true" by default so all these nodes are advertising (not just node1) gm brb b gm I'm up to chapter 5 of the Rust language book, should I go through the whole thing up to ch20 where you build a multithreaded web server? Would you guys recommend that? yes you will use all of those features everywhere btw are you in ##rust on libera? recommend going there thanks for your input, and no I'm not, not sure what libera is actually? : @lunar-mining pushed 1 commit to master: 63619cf061: outbound_session: downgrade host if we fail to connect ACTION facepalms just a line of code that i forgot to uncomment checking other things deki: search for libera chat irc, then join ##rust you might need to register ok thanks kinda funny story btw: the other day I was talking with 2 senior engineers at work, and they remarked "forget about C++, start learning Rust asap" lol already way ahead of them yeah it's already in the linux kernel noice wtf codeberg deleted one of my commits! :/ wtf i was working on this all morning and the code is gone that's weird, I'm guessing that isn't a normal thing to happen with codeberg?
https://codeberg.org/darkrenaissance/darkfi Title: darkrenaissance/darkfi: Anonymous. Uncensored. Sovereign. - Codeberg.org omg https://agorism.dev/uploads/screenshot-1706004302.png well those js maybe are unrelated, but wtf i just lost 2 big commits :/// wtf the code is actually gone? isn't it mirrored to github? and you don't have it locally? no i did git pull to push and then it deleted the commits omg i had to resolve a conflict, but the commit is gone in history wtf that's very bizarre : @narodnik pushed 2 commits to master: 56cc2b1627: switch DAO auth enc to use ElGamalEncryptedNote from SDK : @narodnik pushed 2 commits to master: e0ecada541: dao: use ElGamalEncryptedNote for DAO::vote() instead of AEAD ok i managed to find them using: git reflog --all and find the orphaned commit hashes gm still that was quite scary gm ser Didn't you set up codeberg to mirror github? Then I suppose it would overwrite with github if you pushed something only to codeberg? it seems the mirror stopped working, someone pushed, codeberg pulled and overwrote the commit we should maybe all use codeberg instead Maybe there's a way to make a two-way mirror? it should be two-way oh is it? yep i've been using it for a month now maybe i'll create a log of all commit hashes in case something happens again upgrayedd: Can we change verify_transactions() to return Err if there are any invalid txs? I don't like that the API returns Ok(Vec) for invalid txs set And now with fees, verify_transactions() should be returning the accumulated gas usage brawndo: iirc it returns Err(ErroneousTxs(Vec)) not Ok(Vec) for invalid txs set pub async fn verify_transactions(...) { ... ; Ok(erroneous_txs) } verification.rs:621 With tx fees, for example, we should be using verify_transactions() to get accumulated gas from a block so you can make a proposal with the fees rewarded yeah thats wrong, just change it to Err(Error::TxVerifyFailed::ErroneousTxs(erroneous_txs)) if !erroneous_txs.is_empty() otherwise Ok(fee) okay maybe I forgot to change it just make sure that execute_erroneous_txs() in test_harness expects error and checks that the txs.len() is the expected one Yeah I suppose clippy will tell me hopefully :D I believe that code was before we added the TxVerifyFailed::ErroneousTxs error code, so I probably forgot to update the fn anyway you'll find your way around ++ upgrayedd: Here's my pull request :P https://termbin.com/zj7n haumea: what would happen if I pass a wrong secret in ElGamalEncryptedNote::decrypt() ? brawndo: noice, check contract tests pass tho also tbh I prefer if let Err(e) = foo() { match E { ... } } than match foo() { Ok(_) => (), ... } I did the match to keep existing behaviour ah you mean if let Err and then match mhm ETOOMANYSLOC if let Err is less XD exactly 1 less the Ok() one : @lunar-mining pushed 3 commits to master: d404fe946a: chore: fix spelling on debug statement : @lunar-mining pushed 3 commits to master: 111226ef0e: chore: fix debug path : @lunar-mining pushed 3 commits to master: 7fa973302d: net: fix death loop... actually no its the same lol upgrayedd: plz see commit msg of 7fa973302de6cee915b82cc75258d26d690a1909 (changes downgrade_host to remove_host) i don't see a different solution to this aside from preventing greylist connections, which we don't want to do brb beep: ain't that the not keeping trash policy we discussed previous week? 
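backing up to the verify_transactions() change above: the agreed return shape, sketched with stand-in types rather than the actual validator signature:

    #[derive(Clone)]
    struct Transaction; // stand-in
    enum Error {
        TxVerifyFailed(Vec<Transaction>),
    }

    /// Per-tx verification returning its gas use (stubbed here).
    fn verify_tx(_tx: &Transaction) -> Result<u64, ()> {
        Ok(0)
    }

    /// Err if any tx in the set is invalid, otherwise the accumulated gas,
    /// which a proposer can turn into the block's fee reward.
    fn verify_transactions(txs: &[Transaction]) -> Result<u64, Error> {
        let mut gas = 0;
        let mut erroneous = vec![];
        for tx in txs {
            match verify_tx(tx) {
                Ok(g) => gas += g,
                Err(()) => erroneous.push(tx.clone()),
            }
        }
        if !erroneous.is_empty() {
            return Err(Error::TxVerifyFailed(erroneous));
        }
        Ok(gas)
    }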
:D yeah but before we said downgrade and keep on greylist now we are saying full on remove, so it's different iirc I specifically said to always remove them, but anyway brawndo: is there a way to multiply a Base value by VALUE_COMMIT_RANDOM in zkas? dasman: can you rebuild and test again? lmk what other problems you encounter haumea: what would happen if I pass a wrong secret in ElGamalEncryptedNote::decrypt()? : @parazyd pushed 1 commit to master: 2305faefb4: validator/verification: Return an Error if there are failing txs in verifying sets beep: fmt soz there is no wrong secret unless it wasn't randomly generated lol nw haumea: so I can decrypt them with anything? haumea: No, NULLIFIER_K does the Base field : @lunar-mining pushed 1 commit to master: 4d4fab1c28: chore: fmt there are 2 keys, the ephemeral one (sender owns the private key), and the recipient one the question is what would happen if I pass a random secret(not randomly generated, just random) to the function would it explode? would it return me trash? thats the Q it will work fine just not guaranteed to protect your privacy, it should be randomly generated what would the returned decrypted values look like? just some random pallases? yep darkfi/src/sdk/src/crypto/note.rs:104 you see it's hashing them with numbers these blind the values aha so it doesn't matter since you ensure a blind exists either way so decrypt at will always have something to return do you know diffie hellman trick? yy so with diffie hellman we get a shared secret was just asking because Result<()> was removed so I was like wait can't it explode then we are deriving blinds from this shared secret, and adding them the Result wasn't doing anything before oh lol XD yeah ;) it's safe but you can misuse the API by not passing in a random secret yeah thats a valid execution wise case tho well it could accept Rng like AEAD but then the function needs to return the secret too maybe that's better brawndo: can i add it to ZK VM? Sure ok cool yeesh ^_^ so when we instantiate FixedPointBaseField, we pass it OrchardFixedBases as the trait params this is an enum with a predefined NullifierK accessed by the chip Yeah it's the constants in sdk/crypto/constants so i think we somehow need to find a way of storing info in the enum, that allows dynamic switching of the NullifierK value No just make your own constant to do what you want to do What is even "dynamic switching of NullifierK? 
ah ok nvm ic now See line 112 of fixed_bases.rs Well all of that is interesting lol my cursor was literally on that exact line https://github.com/parazyd/halo2/tree/main/halo2_gadgets/src/ecc ah so it's our fault actually it's hardcoded to nullifier k but the struct should hold which const it is pub struct NullifierK; https://github.com/parazyd/halo2/blob/v4/halo2_gadgets/src/ecc/chip.rs#L325-L336 this one, right https://github.com/parazyd/halo2/blob/v4/halo2_gadgets/src/ecc/chip.rs#L344-L364 https://github.com/parazyd/halo2/blob/v4/halo2_gadgets/src/ecc/chip.rs#L366-L378 The rustdoc here explains how they work Variable-base multiplication works with scalars You can get a lossless base->scalar with mod_r_p() although I'm not sure how that works _inside_ halo2 darkfi/src/sdk/src/crypto/constants/fixed_bases/nullifier_k.rs for this constant it does Yeah Perhaps you can impl another struct similar to OrchardFixedBases s,struct,enum, isn't the problem just the chip is specialized to NullifierK which in our impl is fixed to NullifierK const darkfi/src/sdk/src/crypto/constants/fixed_bases.rs:149 NullifierK is a constant so if we change that it should work, no? It's just "k" hashed to the curve yeah but we have: type Base = NullifierK; Yeah I suppose you can maybe impl that for ValueCommitR yeah so something dynamic that can be "specialized" to constants What does "dynamic" mean here? the VM isn't fixed to NullifierK for Base note we have a bunch of code like: darkfi/src/sdk/src/crypto/pedersen.rs:34 It's fixed to the constant type EcFixedPointBase i mean to switch to other constants, not only fixed to NullifierK ok cool trying this now darkfi/src/sdk/src/crypto/pedersen.rs:34 is a specific function for committing to a base-field value That's why it uses the NullifierK generator Inside ZK then it does decomposition to make sure it's base-width yes but it uses NullifierK, when it should use VALUE_COMMIT_V No it should not pedersen_commitment_u64 uses VALUE_COMMIT_V Then in ZK that decomposes to make sure it's no more than 64bits Those two functions are different I already explained this last week if VALUE_COMMIT_V fits inside Fp (which it most likely does), then it should be defined the same False VALUE_COMMIT_V inside the VM decomposes its factor to enforce a range check from 0..64bits NULLIFIER_K does not do this See the chip.rs links I pasted above yes but i'd add VALUE_COMMIT_V_BASE for this That makes no sense so it would work why not? Why would you do that? because when we define the pedersen commit (for example in the spec), we define the function one way it's strange to switch between generators and could cause subtle bugs or confusion That's why they are two separate functions If you want to commit to a 64-bit value you use the 64-bit function If you want more, you use the _base() function they are the same function, except _u64 does pallas::Base::from() to the value before calling pedersen_commit_base() there should be no issue in switching them to the same generator, it will improve the code I don't like that They are not the same functions ok well we don't have to change it, but this way is strange.
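to make the two shapes concrete, a sketch with the generators passed in explicitly; g_v, g_k and g_r stand in for VALUE_COMMIT_V, NullifierK and VALUE_COMMIT_R, and none of this is the actual SDK signature in pedersen.rs:

    use pasta_curves::group::ff::{FromUniformBytes, PrimeField};
    use pasta_curves::pallas;

    /// Lossless Fp -> Fq map (what mod_r_p() does); since p < q no
    /// reduction actually occurs. Assumes pasta_curves' FromUniformBytes.
    fn mod_r_p(x: pallas::Base) -> pallas::Scalar {
        let mut wide = [0u8; 64];
        wide[..32].copy_from_slice(&x.to_repr());
        pallas::Scalar::from_uniform_bytes(&wide)
    }

    /// Commit to a u64; in-circuit this variant additionally decomposes
    /// the witness to enforce value < 2^64.
    fn commit_u64(value: u64, blind: pallas::Scalar, g_v: pallas::Point, g_r: pallas::Point) -> pallas::Point {
        g_v * pallas::Scalar::from(value) + g_r * blind
    }

    /// Commit to an arbitrary base-field value; no range check.
    fn commit_base(value: pallas::Base, blind: pallas::Scalar, g_k: pallas::Point, g_r: pallas::Point) -> pallas::Point {
        g_k * mod_r_p(value) + g_r * blind
    }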
the generator is even called NullifierK They use different generators for different purposes The generator name does not matter What matters is what it does in ZK it's not used in the code except the old consensus code so we could remove it actually also pedersen commits are expected to have additive property, but this is not additive with the _u64 variant It should be used to commit to coin value So you perform a 64-bit range check over `value` You don't need the additive property since you're using it to commit to a single coin yeah but _base() is expected to be additive since it has no other use except additive commits that _u64() does a range check is incidental impl detail and a side effect we use as convenience anyway it's not important since it's unused code anyway btw an interesting fact is the probability of a random pallas scalar not being able to fit inside pallas base is near zero it's 2.99 * 10^-51 If pedersen_commitment_u64 is not being used, it is a mistake It is also additive, up to 64-bits Which is also fine in our usecase since the maximum supply of any token is 2^64 not up to 64 bits, it's additive over pallas Base I mean that for value (Coin value) commitments pedersen_commitment_u64 should always be used It is there for that exact reason so that it enforces correct supply yep makes sense If you want to commit to arbitrary base-field values, then pedersen_commitment_base is the function to use For that reason they also have two different generators which give: - distinguishability we could actually have a gadget to convert pallas::Base and pallas::Scalar values - different properties inside zk So you won't be able to confuse one commitment with another I'm not sure that gadget would work like programming does There are probably a lot of implications and risks to doing that stuff it's fine, you can start with Fp which fits completely inside Fq and convert that to Fq I meant going from Fq to Fp ah they are equivalent inside ZK i mean if you have an x in Fp, and y in Fq, showing x ~ y then that's the relation you need, so converting from Fp -> Fq is sufficient for all use cases Yeah then you use EC commits to do computations in Fq We can get Ying Tong to work on this later on She should be available after Feb nice oh wait i'm so dumb we don't need to convert Fp -> Fq we can just use EC mult to compare the values ec_mul_base(x, G_BASE) == ec_mul(y, G_SCALAR) Yeah you have constrain_equal_point which might work biab got an errand cya l8r brawndo: can we rename mod_r_p() to something like mod_fv()? 
or fp_mod_fv() https://darkrenaissance.github.io/darkfi/spec/crypto-schemes.html#pallas-and-vesta Title: The DarkFi Book r and p are not defined in the spec Fp = pallas base field, Fv = vesta base field dasman: commits stopped being mirrored from codeberg to github they are being dropped when someone pushes on github beep: yeah, everything works fine now I'll update my nodes right away haumea: I don't see anything weird going on either side :D They're out of sync now, syncing manually not working I guess haumea: I think the codeberg issue is that it finds a conflict, therefore rejects the push to mirror also use make clippy and cleanup warnings before pushing : @aggstam pushed 2 commits to master: 8ebcfe3222: drk2/dao: updated to latest changes : @aggstam pushed 2 commits to master: 342106e2ac: drk2: Completions functionality added upgrayedd: your commits will be dropped I guess Since codeberg does force push and yours failed to mirror to codeberg right now two repos have different heads codeberg is on 769493b0ce3aee5078366ff6736b362203237445 while github on 342106e2ac29a40db436259c03002e6b6401eacf this is the issue, codeberg failed to mirror 769493b0ce3aee5078366ff6736b362203237445 to gh yeah, and codeberg shows no error whatsoever we must solve this b haumea: Yeah sure @ rename, that was just taken from orchard haumea: Canonically the fields are called Fp and Fq where Fq is the scalar field in pallas ok ty p, q in math usually mean 2 primes They are re: codeberg, what do you advise? i have new commits now, should i wait? https://docs.rs/pasta_curves/latest/pasta_curves/struct.Fp.html Title: Fp in pasta_curves - Rust https://docs.rs/pasta_curves/latest/pasta_curves/struct.Fq.html Title: Fq in pasta_curves - Rust brawndo: yes we have them in the spec https://darkrenaissance.github.io/darkfi/spec/crypto-schemes.html#pallas-and-vesta Title: The DarkFi Book haumea: yeah wait ++ codeberg force pushes we can't change that ffs yeah it's annoying, but somehow the github history overwrote the codeberg one before i think the solution is we all use codeberg github is just a mirror or the other way around codeberg exists so people can commit with tor https://darkrenaissance.github.io/darkfi/dev/contrib/tor.html Title: The DarkFi Book I know why it exists, I suggested it lol :D true, but then why would codeberg be a mirror forking? contributing assumes you already have write access to the repo for external contributors using tor, they still have to create their fork in codeberg i've been using codeberg to commit for a month now we should all use tor to avoid being tornado cashed it's easy to setup and just works once running i don't even need to manage tor, it's running as a system service biab then we should also disable writes in the gh mirror so it's just a mirror yeah sure only codeberg can write/push to it i've joined codeberg matrix, going to ask whats up goddamn matrix is such a slow piece of shit lol don't they have a libera channel?
maybe but they link to matrix I disabled force pushing to master on both sides We can check if there is a way to only enable a specific key to force-push I think you need gh enterprise for that ok hold on for a sec I'll sync the two : @parazyd pushed 1 commit to master: 79c18a16ec: zkvm: add VALUE_COMMIT_R_BASE that was easy lol Should be fine now but pls re-clone the codeberg repo Or do a proper sync if you know what you're doing haumea: ^ ok i'll reclone, just going to save on github Both have same history now But codeberg was overwritten (I picked the patch that was on codeberg but not on gh) how did they get out of sync? weird commits-notifier shows your commit but not mine b haumea: force push was disabled on gh and cb couldn't push the name of he who authorizes appears milord ah never mind, there's a UnicodeEncodeError on commit bot's side you used latin-1 codec Restarted it haumea: Why rename the field to Fv ? There's a discrepancy with the Rust code It should be Fq btw darkirc nodes are updated and running darkirc: seeds = ["tcp+tls://dasman.xyz:5262"] Cool! Any issues you noticed? reported them and beep fixed them, seems everything is working fine Great I'll start one in a bit cool, sent a couple test msgs for you to sync No peers I see you connected to the seed but not the peer There's also a directory under ~/.local/darkfi/darkfi/ with a hostlist That's the wrong path I'll delete the hostlist and try again found peer and synced : @parazyd pushed 2 commits to master: 39f7a8828a: net/settings: Do not write a default hostlist to the filesystem. : @parazyd pushed 2 commits to master: cf2a5fcc44: darkirc: Add "hostlist" to config I'm not getting any peers net::session::seedsync_session: [P2P] Greylist empty after seeding Failed to start again can you ping dasman.xyz? 16:34:19 [INFO] [EVENTGRAPH] Syncing DAG from 1 peers... 16:34:19 [INFO] [EVENTGRAPH] DAG synced successfully! Yeah I can ping it But I'm getting no peers from the seed any chance you forgot to git pull and make No Will try on another server 16:40:06 [ERROR] [P2P] Channel send error for []: IO error: connection reset Weird huh but it reconnected [WARN] Greylist is empty! Cannot start refinery process This comes up a lot now Oh my local node panicked thread '' panicked at darkfi/src/net/hosts/refinery.rs:106:26: removal index (is 0) should be < len (is 0) beep: ^ I don't think the correct fix is there though If that line panics, then other code is problematic ugh there's so many locks lol yeah okay also I'm periodically disconnecting from peer and connecting again : @parazyd pushed 1 commit to master: 2fc0ceeccf: net/hosts/refinery: Attempt to fetch exclusive greylist lock before pinging _maybe_ this helps But there are a lot of locks, this can be simplified 44 44 locks? ACTION is updating nodes Yeah : @parazyd pushed 1 commit to master: badd907efc: hosts: Wrap module rustdoc Okay nodes running My local one connected right away [WARN] Greylist is empty! Cannot start refinery process this warning is normal and probably should just be a debug statement or info msg ++ was just pointing it out Sometimes it's every second Sometimes every 5 seconds it should happen every 5s unless greylist refinery interval is modified in settings can you connect?
And see if you can discover my peer in a bit, just cleaning up after dinner rn Aah yy my bad, you're right I just got confused with a time out warn yy sure take your time I don't have much charge left tho will probably save it for later tonight i can do now np I'll leave nodes running I can access them from phone as well ty i am connecting to tcp+tls://dasman.xyz:26661 via the seed Nice so brawndo couldn't for some reason Still not finding peers even on another srv No peers from seed dasman: is the dasman seed a lilith btw? weird that it would send its whitelist to some nodes but not to others peer discovery works? brawndo: about the locks, every time we read or write to the hostlists there is locking involved. i'm not sure how that could be simplified beep: ah no it's not lilith Should it be? it should send peers through peer discovery yes protocoladdr protocolseed also lilith has additional mechanisms for ensuring node liveness in the whitelist you don't need lilith for testing https://github.com/darkrenaissance/darkfi/blob/master/src/net/session/outbound_session.rs#L251 yeah, would be good to also test lilith, but not necessary peer discovery is triggered only if you get a hostlist sus no that's not correct if we can't fill the slot, we do peer discovery are we sure it gets triggered? did you test it? yes biab okay it's lilith now just to be clear, this is my lilith config: https://pastebin.mozilla.org/7Cymy14v Title: Mozilla Community Pastebin/7Cymy14v (Plain Text) beep: in previous version when we asked addresses from seed or peer, we effectively got 2 sets back, one with our preferred transports and one without, so we can share with peers that use those is this still true with the refinery? b yes transport filtering logic is unchanged it's just been mapped to the new hostlists yeah I mean the refinery knows that that host is not on our transports so it shouldn't try/remove it it should always exist on greylist ah, no the refinery doesn't take transports into acct at all so these peers get yeeted? XD i'm not sure tbh would the handshake fail if it's on the wrong transport? it shouldn't try to connect to them at all ok that's easy to fix when you fetched from hosts you used schemes to filter yes we filter on fetch but not inside refinery we just select a random peer from greylist elsewhere we filter tho thats wrong, refinery should always fetch peers in our transport schemes yes ik we just established this hah lol yeah just restating XD ++ Hey everyone. Just a quick note to say that I've had a very stable ircd instance for the last few days which has been a delight. I'm wondering with those last few commits whether running darkirc on master should be possible? To destabilize my setup again :-) we're testing rn dark-john definitely sub-stable haha How many commits were in your monster merge? Close to 100? I'll give it a whirl on my other computer. And keep this one healthy. too many commits lol so the issue is if we never ping nodes that aren't accepted transports, they will never go onto our whitelist, and we will never broadcast them to other nodes which means that nodes are just sharing info about their own transports wait isn't the list we share supposedly containing peers based on % wdym % lets say 80% whitelist, 20% greylist or we just share whitelist? yes 2nd thing just whitelist?
we share whitelist, and store recv'd nodes in greylist no it should be like: fill from whitelist, then remaining slots from greylist the 80% 20% thing is when we connect to addrs in outbound connect loop, we have e.g. 80% preference for whitelist following the transports rule we do that in outbound loop, but we always share our whitelists only, since greylists are considered unsafe so when I ask for example tor, I should get back max N tor peers, and then another max N other transports do you recall how previous peers request worked? ah so you're saying put other logic here that selects from greylist if whitelist doesn't have any e.g. tor so previous logic was: I ask for N peers, for some foo transport then the other side first tries to grab max N peers for that transport regardless of how many it got, it will fill the rest of the vector from peers not of that transport yeah we still do that but we read from whitelist so I will always get back max 2N peers yeah but right now our whitelist should only have peers in our own transport not the peer requesting peers so the logic should be: grab max N peers from our whitelist for that transport fill remaining vector from greylist since other transports peers will live indefinitely in our greylist or actually no it should be: grab max N peers from our whitelist for requested transports fill remaining vector from whitelist fill remaining vector from greylist since we might support extra transports than the requesting peers what about "if whitelist doesn't contain this transport, select from greylist" yeah thats the 3rd step let me give you a full example I run a node that supports tcp, tls, tor I ask seed and it gives me back: 10 tcp, 10 tls, 10 tor, 10 nym I add them all in greylist after refinery I will have in whitelist: 5 tcp, 5 tls, 3 tor and in greylist: 5 tcp, 5 tls, 7 tor, 10 nym after full refinery, my greylist will only have the 10 nym (assume all other nodes were offline) so you come and ask me for 10 peers for tcp and nym I should give you a combo of 10 randoms from my 5 whitelisted tcp and 10 greylist nym and then give another 10 randoms from my 5 whitelisted tls and 3 tor you get them all into your greylist after refinery you will have in whitelist: some tcp, some nym and in your greylist you will have the tls and tor you got from me yy i get u the previous impl didn't take into acct transports at all it did so pretty confident this is what was causing addrs to not propagate i mean the refinery impl oh XD will add this in AM crashing now yy chill when I finish with drk stuff I will also do a deep dive into it since I pretty much need an as stable as possible p2p for nodes testing nice ty https://youtu.be/7HECD3eLoVo?si=8u03A6lZR3xa2ZeV&t=504 Title: Pieter Hintjens - How Conway's Law is eating your job?, Opening Keynote at Coding Serbia 2015 - YouTube :) welcome sylvain whitelist_fetch_with_schemes() fetches peers matching the requested transports from the greylist if there's not enough on the whitelist so the logic is currently: grab max N peers from the whitelist for the requested transports if < N peers available, grab peers with the requested transports from greylist fill remaining vector from whitelist lmk thoughts, forgot we were already doing this yday : @lunar-mining pushed 1 commit to master: 453a712b9e: refinery: only refine nodes that match our transports... what's a greylist in this context?
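To pin down the sharing logic the discussion converges on (N whitelist peers for the requested transports, topped up from the greylist, remainder from the rest of the whitelist), a hedged Rust sketch follows; the types and helper names are illustrative assumptions, not darkfi's actual hosts API.

```rust
// Hedged sketch of transport-aware peer sharing; not the real hosts API.
use rand::seq::SliceRandom;

type Addr = String; // stand-in for the real Url type

fn matches(addr: &Addr, schemes: &[String]) -> bool {
    schemes.iter().any(|s| addr.starts_with(s.as_str()))
}

/// Answer a request for `n` peers on the given transport schemes:
/// 1. up to n whitelist peers matching the requested schemes,
/// 2. top up with greylist peers matching the schemes,
/// 3. fill any remainder from the rest of the whitelist.
fn fetch_addrs(whitelist: &[Addr], greylist: &[Addr], schemes: &[String], n: usize) -> Vec<Addr> {
    let mut rng = rand::thread_rng();
    let mut out: Vec<Addr> = Vec::with_capacity(n);

    let mut wl: Vec<&Addr> = whitelist.iter().filter(|a| matches(a, schemes)).collect();
    wl.shuffle(&mut rng);
    out.extend(wl.into_iter().take(n).cloned());

    if out.len() < n {
        let need = n - out.len();
        let mut gl: Vec<&Addr> = greylist.iter().filter(|a| matches(a, schemes)).collect();
        gl.shuffle(&mut rng);
        out.extend(gl.into_iter().take(need).cloned());
    }

    if out.len() < n {
        let need = n - out.len();
        let mut rest: Vec<&Addr> = whitelist.iter().filter(|a| !out.contains(a)).collect();
        rest.shuffle(&mut rng);
        out.extend(rest.into_iter().take(need).cloned());
    }

    out
}
```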
I've seen it mentioned before https://darkrenaissance.github.io/darkfi/arch/p2p-network.html#hostlist-filtering Title: P2P Network - The DarkFi Book ty : @lunar-mining pushed 1 commit to master: 7f3d43f538: protocol_address/seed: don't return if ping_node is false... : @lunar-mining pushed 1 commit to master: fe95e34db5: chore: remove artifact from debug statement : @lunar-mining pushed 1 commit to master: b1511b991a: chore: correct debug statements and code comments gm gm so advertise is set to default true in settings but somehow this is getting overwritten to false in darkirc, i do not understand how beep: user config or arg? it seems our empty hostlist error is to do with the send_my_addrs stuff in protocol_addrs/ protocol_seed, but i'm still digging into it rn upgrayedd: i mean when it's not set by user config or arg, it should be true cos it's true in defaults but somehow it's false idk how iirc structopt/arg bools default to false if not present ahhh so you have to set the flag manually to true in each daemon args gotcha, tnx imo it should be false by default yeah maybe haumea said true by default i think per monero settings advertising should be opt in not opt out but happy to discuss if you have different opinions We used to enable it when external_addr was set (by advertising we mean to publish your connection details so other nodes can find you) I'd call that opt-in brawndo: yeah thats opt-in well that hasn't changed beep: then why not use that instead of an extra bool? I mean when thats set -> advertise = true (remaining opt-in) so rn we do: if advertise == true AND !external_addrs.is_empty(), broadcast our addrs but i can't remember why we needed this additional check rather than just using the presence of an external_addr as a de facto bool check if it has any other usage anywhere else if not yeet it and just use if external_addrs.is_empty() { continue } i'm fine with just using external_addrs as the advertise check codeberg and github are out of sync again "I think this is exactly the problem. Since you have a two-way mirror, having an update in both ends too close to each other can end up with different history. In this case, both sides cannot simply push without overriding history. And "force push" is an explicit option you need to make." that's what they said on matrix aha so what we were saying yesterday it's true, that it needs force push to work properly : @lunar-mining pushed 2 commits to master: 831d17cd48: chore: more informational debug statement : @lunar-mining pushed 2 commits to master: 949f9f0f6f: net: remove `advertise` bool from settings... brawndo: we should rename Fq to Fv or change the spec: https://darkrenaissance.github.io/darkfi/spec/crypto-schemes.html#pallas-and-vesta Title: Cryptographic Schemes - The DarkFi Book _p and _v are used consistently to denote pallas or vesta Where? The designers of the curves use Fp and Fq to denote the fields So by using Fp and Fq we'd be consistent with their documentation if it ever needs referencing (Which you do once you start going into halo2, etc.)
i changed it because it's confusing trying to remember whether p or q is the base field of pallas or vesta Think about it differently Don't bother with vesta we've been using Fp for over a year so it's natural now, but it's not intuitive Just consider p as base field, and q as scalar field Then when you think about vesta, you just invert them It's intuitive because of how these two curves work fp_mod_fv() says more than fp_mod_fq() Subjective https://docs.rs/pasta_curves/latest/pasta_curves/ Title: pasta_curves - Rust I'd rather be consistent with this i'll change it Thanks can we all switch to pushing on codeberg today? that way we preserve committing through tor ok can set that up in the afternoon we can still accept community pull requests through github, just the 2 way mirror can fail infrequently (i've been using codeberg for a month so far without issues, it only failed the first time yesterday) It looks like the two repos are in sync though it deleted my commit from this morning 291dcbd50 (origin/master, origin/HEAD) spec: ElGamalEncryptedNote b1511b991 chore: correct debug statements and code comments :/ Can you cherry-pick those and push to codeberg? Yesterday it seemed to work yep https://codeberg.org/darkrenaissance/darkfi/commits/branch/master Title: darkrenaissance/darkfi - Codeberg.org why don't we all just use codeberg? it will be functionally the same, except we update our remotes Well the issue is that it's not syncing, not that we all switch I'm tweaking some settings So yeah it doesn't work automagically upon push to codeberg Only in 10m intervals the issue with 2 way only appears rarely when 2 people push within the same interval Yeah so we could enable the force push, but then for safety, everyone pushes from codeberg I wouldn't call it rarely though :D It definitely happens rarely cos i went a whole month without issue force push was enabled back then I guess now the problem was the single fast commits you did yesterday so don't git commit && git push on every commit XD I'm setting it up hold on ok no it's rekt haha Gonna set up a webhook : @parazyd pushed 5 commits to master: d87be16dd3: spec: update DAO section with recent ElGamalEncryptedNote fixes. : @lunar-mining pushed 1 commit to master: 3ac51c06bd: store: reverse the order of hostlists... : @parazyd pushed 4 commits to test: 040285ad52: foo : @parazyd pushed 4 commits to test: b2a78fb78d: foo : @parazyd pushed 4 commits to test: 5f79ca2d9c: baz : @parazyd pushed 1 commit to newbranch: cfcdca6645: testnew : @parazyd pushed 1 commit to master: 354ef3270e: store: reverse the order of hostlists... ok it works codeberg will now always force-push to github thanks beep: Please clone the codeberg repo And then just commit to codeberg in the future upgrayedd, dasman: Same ^ https://darkrenaissance.github.io/darkfi/dev/contrib/tor.html Title: Using Tor - The DarkFi Book just install tor daemon, enable the service on your OS, and set this up kk will change to full codeberg in a bit def darkfi_repo_mirror(branch): subprocess.run(["git", "-C", repo_path, "fetch", "--all"]) subprocess.run(["git", "-C", repo_path, "checkout", branch]) haumea: you mean ln -s /etc/sv/tor /var/service/ ?? subprocess.run(["git", "-C", repo_path, "reset", "--hard", f"origin/{branch}"]) subprocess.run(["git", "-C", repo_path, "push", "-f"]) :D brawndo: do we need the subprocesses? chad_yes.jpg It's just a python flask script that listens to webhooks for pushing to gh?
I mean whats the usage (Also renders website and blog) Yeah what I pasted above is mirroring codeberg to github add it to repo .git config then 1) What It's set up wait I thought we have to set it up manually url = https://codeberg.org/darkrenaissance/darkfi XD Title: darkrenaissance/darkfi: Anonymous. Uncensored. Sovereign. - Codeberg.org pushurl = git@github.com:darkrenaissance/darkfi upgrayedd: Just make sure you push to codeberg and not github yy I will yeet my origin cd .. upgrayedd: indeed btw this also applies to pull requests So on Github when you have a PR Like this one: https://github.com/darkrenaissance/darkfi/pull/248 Title: fixing hardcoded value for decimal places to constant by deki-zedd · Pull Request #248 · darkrenaissance/darkfi · GitHub You can access the patch by appending ".patch" to the URL Then you would apply it like: curl https://github.com/darkrenaissance/darkfi/pull/248.patch | git am - And push to codeberg lmk if you experience breakage haumea: One question haumea: If someone publishes a valid (unique) nullifier, but it turns out that something in that tx fails, can this nullifier be abused for disabling someone's coin? I assume not, since it's vulnerable to bruteforce anyway, but just want to confirm : @parazyd pushed 1 commit to master: 228aea9926: contract/fee: Enforce that fee_paid > 0 if the tx fails then no state is changed nullifiers also commit to private keys, so you can only mess with yourself okay Was wondering if I need to add more restrictions/constraints around the fee API to not mess something up But it's fine when this is the case Making the fee call constant-sized is nice, then we have a more or less fixed-cost Which also simplifies the call creation since we know how much gas it uses in advance +- some tiny percent (That percent being the sometimes dynamic size of the incremental merkle tree) nice, it's certainly easy to reason about sup everyone haumea: fyi gnunetcan doesn't have --proxy flag should I update my darkirc? haven't updated in like a month anonkun: we are testing some new stuff, better to wait some time ok cool haumea: oh one more thing, when documenting the money contract, skip any faucet-related stuff haumea: This is in queue for removal if you're all using codeberg, will github still remain for people who want to try their hand in contributing? deki: yes read what brawndo said above PRs will be applied as patches ah right, thanks : @parazyd pushed 2 commits to master: 98fd142aa4: spec: elgamal enc, cleanup use of subscripts : @parazyd pushed 2 commits to master: d06cffd0f1: spec: add description of the group hash algo brawndo: aha thanks : @parazyd pushed 1 commit to master: b498847676: contract/money: Add missing error to the error enum : @parazyd pushed 1 commit to master: 6e7dc81704: chore: clippy Congratulations! Darkfi named Project of the Year and Innovation of the Year in the Web3PrivacyNow poll :-) https://twitter.com/web3privacy/status/1750198455220183452 Web3Privacy Now - one transaction at a time(@web3privacy): Here we go: #2023privacyproof finalists are here! Congratulation to all the finalists: @DarkFiSquad, @RAILGUN_Project, @nymproject, @GrapheneOS, Privacy Pools. Explore: https://t.co/CBNCSnucGd https://t.co/1hpdZQcvlO I'm trying to compile and run a node. Everything went fine until I attempted to initialize the wallet and create a keypair foo: whats the error?
./drk wallet --keygen ./drk: error while loading shared libraries: libout123.so.0: cannot open shared object file: No such file or directory ./drk: error while loading shared libraries: libout123.so.0: cannot open shared object file: No such file or directory please don't paste logs oh sorry use pastebin/termbin/pastenym will do sudo apt install libasound2-dev do i have to restart the terminal after install? I don't think so how long does it take for the block sync? just created a full node gm yo hey : @parazyd pushed 2 commits to master: 74f91dd3bc: dchat: fix mistakes on default config : @parazyd pushed 2 commits to master: 3447394eda: net: make fetch address logic less nested + fix bug... :D draoi: as a general comment, when using vectors as a return type, you don't need Option you just return an empty vector and caller does if ret.is_empty() { continue } way less cluttered than using match statements ++ which method are you talking about actually? greylist_fetch_random_with_schemes oh wait, its a single one yeah thats not applied here lol thought it was the fetch_n_random one (comment still applies obviously in general) : @parazyd pushed 1 commit to master: c04667a845: net/session/outbound_session::fetch_address(): simplyfied returns ++ o general update: net code seems stable (TM) on dchat, but getting DAG sync error on darkirc gna get dnet working to help debugging afk for a bit tho as hitting gym also i'm considering removing the 'ping_self' stuff in protocol_addr/ protocol_seed recap: before sending our addr, we ping ourselves to ensure the addr is valid and update last_seen this is what monero does but we could just set last_seen to 0 and let other nodes deal w it (this would be more in line w what we were doing before) biab upgrayedd: What does this mean? https://github.com/darkrenaissance/darkfi/blob/master/src/contract/test-harness/src/money_pow_reward.rs#L57-L59 brawndo: you mean whats the usage of last_nonce and fork_hash? Yeah and what does it mean in this case? If I create another block in the harness, wouldn't the new reward/block have to extend that? (Provided I'm not trying to make forks) yeah each block extends its previous, so it includes its info that info is used to produce a vrf okay so it's correct that if I want to create a series of blocks, each block would reference block-1 as its last_nonce and fork_hash?
and we use that vrf to produce the blocks rank https://github.com/darkrenaissance/darkfi/blob/master/src/validator/utils.rs#L112-L162 so to calculate each fork's rank to find the best, we aggregate all its blocks' ranks multiplied by its size (we want bigger forks to rank higher) https://github.com/darkrenaissance/darkfi/blob/master/src/validator/consensus.rs#L613-L643 brawndo: yeah that is correct Thanks a lot :) these parameters are for our PoW consensus remember we don't do satoshi's one Yep in our version, forks only exist in the tail, in which we finalize the best ranked fork (minus last block for race conditions) after that finalized sequence can never change btw the finalization logic after some N fork length, is based on satoshi's security parameter that after some N blocks, hash rate needed to mutate them becomes close to infinite, so their chance of getting forked is close to 0 we just eliminate the possibility to mutate them altogether and use an N high enough to ensure that our forks after that length can be securely finalized and those params in the vrf are used so we protect from being able to game a future block rank b : @parazyd pushed 4 commits to master: 2f6bb5748f: drk2: Dao functionality added : @parazyd pushed 4 commits to master: 3062597fca: drk is back in the menu boys draoi: yo can you check the failing net test? yy : @parazyd pushed 1 commit to master: a42fd04bee: store: update test to use new flattened fetch_address logic parazyd- 100x dev lmao lol dasman: You can probably tweak the script to use the commit author :) ++ : @parazyd pushed 2 commits to master: 0e400fb299: drk/Cargo.toml: missing darkfi feature added : @parazyd pushed 2 commits to master: 9d9dd1590c: net/hosts/store: chore clippy : @dasman pushed 1 commit to master: 0567c219bd: bin/darkirc: [commitbot] replace pusher name with committer's : @skoupidi pushed 1 commit to master: cd762c95e0: doc/Makefile: use RUST_TARGET in docs folder path gm from aus wanted to ask: you guys are now in dcon3, when you get to release, does that mean there won't be much dev work to do? Hi Nice, I can see my message in the tg relay Glad to be here and learn hey hey what's up going through the rust lang book! What about you? Good, I am going through the darkfi book and taking a look at each piece nice, it's pretty detailed Indeed I'm noticing that Rust is the core language of the project Even to write smart contracts I'm more a Haskell & functional dev, excited to learn Rust yep, it offers security features that other languages either don't have, or lack. Also making an impact in other projects too For sure Tell me how do I start contributing I saw there is a meeting each monday best place to start is here: https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html then checkout the github repo, that's how I started. But I need to become more familiar with Rust to contribute in a more meaningful way Title: Contribute - The DarkFi Book yeah Monday's there's a meeting in here Damn, I ran $ git grep -E 'TODO|FIXME' and there is a lot to do How you doing with Rust deki ? I'm going okay, a lot of the concepts aren't 'new' to me per se, since I come from a Python/C background, so far the only issues I've had are to do with the data types emphasis Static typing right?
yep Once you get accustomed to it you never look back it prevents a lot of problems now when I code let's say python or js I feel kind of naked yeah lol I mean I'm used to data types from C, but Python is a bit more lax it's good though, I was about to go all in on C++ because I started using it at work last year, and I still am, but I'm just devoting extra time to Rust now Yeah, same, Rust seems a reasonable next target to learn. Did you see some part of the project that u want to start to contribute? well for now it's just identifying low hanging fruit, stuff I can do given my limited understanding, plus from what I can tell they've done a lot of the heavy work and nearing release I can imagine, let's catch those fruits, any idea until now? at least to start to study well what the other devs here have suggested: do the tutorial on writing a p2p app, p2p collab tools, groth16, elliptic curve group law hey ash I gotta head out, will be back later I'm going to sleep, see you tomorrow hopefully and nice to meet you nice to meet you too :) ;) \exit :) gm greets herro greets ash, nice to see some functional in here... was reading about haskell's powerful type system recently... interesting rust has somewhat of an approximation with traits, but not as natural to use as in haskell where it's a core lang construct gm all when the project gets to release, does that mean there won't be much dev work to do? There will always be things to do :) good point! Guess nothing is ever 'finished' upgrayedd: https://github.com/darkrenaissance/darkfi/blob/master/src/blockchain/block_store.rs#L112 upgrayedd: What goes in the Vec ? brawndo: https://github.com/darkrenaissance/darkfi/blob/master/tests/blockchain.rs#L77-L91 tldr: PoW blocks have a single slot, containing the PoW parameters information while PoS have >=1 we use slot as a convention, to not have to create extra stuff in the future oh I see Didn't know about this test file :D I know having the PoW/PoS terms mixed up is a bit challenging at first alas, futureproofing :D *nod* deki: lmao i wish last night i was looking at p2p code and going "i have to rewrite a lot of this" : @draoi pushed 3 commits to master: d60bb8bf33: manual_session: bugfix... : @draoi pushed 3 commits to master: 715f6c7a86: channel: fix incorrect API usage : @draoi pushed 3 commits to master: cecf284cef: net: create system::run_until_completion() to ensure ping_node() does not create zombie process... haumea: why do you say the p2p code needs to be rewritten? because there's a receive loop in Channel which uses Arc that can keep Channel running even if it's shutdown it should probably just upgrade as needed and self destruct when channel is stopped and i'd like to look into AsyncDrop so you don't need to stop() a channel explicitly also i want to make all the ownership and process explicit in a spec doc thanks for the info, I know you suggested to me to make a p2p app to get a better grasp of Rust, seems like that would be helpful for rewriting the API won't change, just the internals : @skoupidi pushed 1 commit to master: 9a2fad2c0f: drk: replaced rest hardcoded balance base10 decimals with the const gm @deki well, for sure when this is ready... the challenge would be about creating the dapps that empower the users To write a p2p app seems fun @deki, once I finish the docs it would be my next step greets @haumea, yeah indeed Haskell type system is robust af, and overall learning functional paradigm opens your mind a lot as a programmer. I highly recommend it.
ash: you have any math background? if you know matrix/linalg somewhat decently, you should check out groth16 algo. you can pick it up real quick this is not a test ACTION waves hey, learning to use this irc chat :) welcome new here, and want to contribute ideally, would like to dive into the rust crypto/math stuff have some considerable python and devops skills as well v cool, maybe you already saw: https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html Title: Contribute - The DarkFi Book including nix, the language of choice for reproducible builds yes, thanks saw this and will be going through it I have yet to get setup with tau/taud, the task manager you all use hey, very happy to be here :) so, ok, maybe as a start I will look into the following: python bindings, tests, and documentation ... just as a start :) haumea: I'm working on the groth16 algorithm right now, on the verification part, not the core of it because there is a mathematician in the team. But definitely interesting taking an in-depth grasp. I'm reading the moonmath book to get some idea of it from the mathematical side Is there a tutorial or something regarding the groth16 protocol? pebble: we are testing new p2p code, then we'd migrate to darkirc (ircd+history) and we'd deploy tau dasman: Should the dasman.xyz:5262 seed node be in a healthy state? Not succeeding to find any peers for me, with the tip of master. Or anybody else running a public node? I'd like to step through some of this p2p code against the tip. dark-john: you get send_version() error? Cuz i got that just now Not in the log. Is there a higher verbosity setting you have? Or you just see this in the debugger. fyi - I zapped my TOML config file, let it generate the default and then just switched to your node only as the seed. No other non-default settings. Fairly standard looking STDOUT. Goes around the loop 3 times, never finds any peers, so never gets the DAG synced, and then quits with DagSyncFailed No, it's an [ERROR] Try now, worked right away for me grepped. So send_version() would be an error with text output. right So you ARE getting that Error every time? What are you running as your own seeds[]? You could get DagSyncFailed if you're not connected to any peers, you'll try for 10 times, if you see synced from 0 peers then you won't sync and get that sync failed That is exactly the situation, yes. Looks like there are a few different counters, so I am confusing the counts. seeds = ["tcp+tls://dasman.xyz:5262"] Attempts at seeding hosts, attempts at syncing event dag Are you running two nodes? One for testing as well as that one which is running? you'll be connecting to seed every time until you get your outbound slots filled yes Should be anyway! I tried to run dnet on the side to help debug, but it is not working right now. Think stepping through the debugger is likely my best approach. dnet should work fine sec Oh - it's worked this time, it seems. to sync the dag at least sending messages is not happy No connected channels found for broadcast Could you send a message to dev on that network and see if I can see it? sent yeah I'm getting that "no connected channels found for broadcast" as well balls. thanks for trying. Let me get my debugger going. Just for clarity - you have one node only running (publicly visible as dasman.xyz:5262)? That's the seed node There is another node running as a peer Right - and you have another node locally testing right dasman.xyz:2654 Is your seed node an instance of lilith?
Plus another one locally testing It is yes So you have the publicly visible seed instance (lilith executable), another publicly visible node instance (darkirc), and then the local darkirc instance? Correct 22:11:28 [ERROR] [P2P] Read error on channel tcp+tls://dasman.xyz:5262: IO error: connection reset 22:11:28 [ERROR] send_version() failed: Channel stopped What does lilith do different from darkirc? They are both using the same p2p components, but lilith is there ONLY as a seed node? Not running any "application" on top of the p2p? And presumably lilith is also actively pairing too? And then the two darkirc nodes have that network layer, but then also the application layer on top? And seeds[] could point to any p2p application instance, but lilith is the simplest way to seed the network because it is doing exactly and only those p2p bits so should be very stable? (when the p2p itself is stable, of course!) I think upgrayedd or beep would answer better, but afaik lilith has some extra protocols for peer discovery, plus it can have multiple seeds (networks) e.g darkirc and tau Has your lilith node been running for a while? dark-john: lilith: A tool to deploy multiple P2P network seed nodes for DarkFi applications with a single daemon. Mother of all daemons dasman: it doesn't have any extra stuff I thought I read that somewhere in your conversation with beep it just spawns an app-agnostic seed for each configured network :) upgrayedd: hey! that was probably the whitelist cleanup since lilith acts as a seed, it must also keep track of node liveness per network ah then, my bad to have up-to-date peers to share right so to answer fully to dark-john: yes ash: youtube has a fair few videos of people explaining groth16, also a number of articles. I've been going through this one: https://www.youtube.com/watch?v=VQyDSxB9Bls Title: Zero-Knowledge Proof: Groth16 - YouTube fyi the net code is not stable rn Looking at the lilith toml file, would the tsv files referenced in each of the network sections just be manually created? There is a version number there per network, but I don't see a matching version number in the darkirc toml. Is that just implicit? (some number in darkirc code?) i would not suggest deploying nodes/ large scale debugging efforts rn ok. Any way we can help? all good, i should have more info 2m dark-john: those tsv are from previous version, where lilith kept the peers manually, but thats integrated into the new greylists impl so they are obsolete and yeah they were created at runtime what's in your lilith toml, dasman? version number configured there will correspond to the application version which is exchanged during handshake there's a bug inside message.rs/ message_subscriber.rs where version messages are being sent and recv'd but not picked up by the subscriber so with that you can have a seed for both darkirc 0.4.1 and 0.5.0 just gathering info atm Right, so implicit versioning on both sides, and a runtime validation that they match? gn dark-john: https://pastebin.mozilla.org/FBTwP3MG Title: Mozilla Community Pastebin/FBTwP3MG (Plain Text) So that version configurability is ONLY for lilith? So that lilith can be running multiple parallel networks with different versions? And 0.4.2 is the current/latest, right? yeah it's only for lilith actual app nodes use their build version number in version exchange message Right Are there multiple code-paths in the network stack code then? So you can create a 0.4.1 p2p instance, a 0.4.2 p2p instance, etc?
one correction: yeah the hostlist is for the current impl, still generated and filled at runtime well, lilith code is like 400 lines(including comments) check it out :D OK - will do I've gotta say, I don't like the idiomatic use:: patterns in Rust, with everything resolved in that section rather than more explicit namespacing in the code. you coming from cpp right? XD Indeed I am. Just stuff like bringing a very common name like Value or Error in. And then using it in function. It's extra mental load. It's fine for one level, but when you're flattening multiple levels, it hurts my head! Anyway ... reading. the code is way cleaner tho, and doesn't allow for import errors No, I am sure it will resolve any clashes. Terser, I would agree with. Clearer ... not so much. I disagree, but I guess it's just a different way of thinking I am not Rust idiomatic in my brain yet. Sure I will get there. I wouldn't say this is something rust brought even cpp allows defining file-global namespace it's just dev explicitness thats really common in cpp, compared to other languages It's just a pattern, yes. so imo its more of a habit rather than something fundamentally different You could have done exactly this with C++ "using", but it would not have been idiomatic to have them spelt out so deep. more common to just bring in the top-level namespaces. you can do that in rust too, like in python with import * from foo std::vector, etc. used in the code. but thats even worse imo why not explicitly define what you are using? I don't understand the argument against that Fair enough. I'll stop boomering. Gotta run for a bit. lol actually it's not boomering XD dasman: still here? upgrayedd: yeah speaking of lilith I opened the code and see this todo: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/bin/lilith/src/main.rs#L339-L342 Title: darkfi/bin/lilith/src/main.rs at master - darkrenaissance/darkfi - Codeberg.org I'm pretty sure it was resolved, any chance the commit has gone to the shadow realm? when? if in past month then maybe I don't think so but maybe my mind is tricking me let me fix it real quick maybe you solved it in your head but forgot to actually write the code XD yeah probably XD : @skoupidi pushed 1 commit to master: a7120c6fca: lilith: remove missleading todo... dasman: I guess I just forgot to remove the TODO comment, since the version attribute was already there XD lmao best kind of fixes: just remove the todo XD XD Part 2 of the Amir interview is up: https://twitter.com/BobSummerwill/status/1751128306207150415 Bob Summerwill(@BobSummerwill): Part 2 of @Narodism interview, this time on @DarkFiSquad with @RiceTVx: https://t.co/4KpzdI9Gd0 The part actually about darkfi :-) ash: did you see the script/research/zk/ dir in the repo? also check crypto section of the darkfi book pebble: lmk when you're around so we can exchange keys and get setup gm haumea: I added zk proofs and sigs to the fees and it significantly makes them bigger haumea: So I think we could just divide the gas by 10 or 100 haumea: And have that as the actual cost ++ is that running time or just the size of the tx?
i'm guessing latter Signature verification is based on tx size ZK has some lazy algo: https://github.com/darkrenaissance/darkfi/blob/master/src/validator/fees.rs#L28 The latter can be tweaked, but I still think we should just div At this point even the fee call is 0.4 DRK oh yes ofc it should be reduced was just thinking about the relative costs / the estimate methodology anyway i'm not too worried about this, we will get it right over time Yeah so on a high level: 1. https://github.com/darkrenaissance/darkfi/blob/master/src/runtime/vm_runtime.rs#L162 Every operator/opcode in WASM is 1 gas. I don't think we should bother with changing this. (For example Cosmos uses the same logic) aha great It's difficult and time-consuming to go through every possible WASM opcode and have some sensible accounting that makes sense 2. https://github.com/darkrenaissance/darkfi/blob/master/src/validator/fees.rs#L28 the wasm ops are all quite cheap anyway The ZK proofs are accounted for with this algorithm. It should probably be tweaked by doing certain benchmarks. 3. https://github.com/darkrenaissance/darkfi/blob/master/src/validator/verification.rs#L511-L513 yeah so i think some of those gadgets like Ec, when added increase the circuit size, but then further ec ops won't affect the zk circuit until you get pushed to the next power of 2 for rows Signatures are fixed_fee_per_sig*n_sigs + tx_size haumea: Yeah that's likely true. Later we'll have a dynamic table configured when there is time to build such an algorithm https://github.com/darkrenaissance/darkfi/blob/master/src/zk/vm.rs#L310 Using this https://github.com/darkrenaissance/darkfi/blob/master/src/zk/vm.rs#L377 It can be fed into a configuration function So a proper algorithm would know the optimal rows and columns, depending on what is requested for the circuit You could even have a preference between proof-size vs. verification-time Or whatever metric ideally this optimization could be done offline before zkbin is deployed on chain, then knowing the cost is a straightforward calc from the rows and cols used but that's not what we have here, so using the ops in zkbin is a good second method It's not related to zkas but the actual vm Nothing would change in zkas like right now we load sinsemilla chip for every circuit, but not every circuit uses it. possibly without that chip, you reduce the table by 2 cols... although i think EcMul chip also requires loading it? 
No I think only Merkle stuff uses Sinsemilla There's the hash function and the lookup table unless i misunderstood you, i was saying it's because the zkbin consists of ops that operate the halo2 api to build a table, whereas the cost comes directly from the table layout itself Yeah my idea of the ZK cost is the halo2 cost Since the VM eventually will end up self-optimizing, we'll likely have to change the cost algorithm ok, yeah so as an example, just loading sinsemilla will increase the table cost by 2 cols since it introduces 2 new commitments I just did this as an initial solution Yep correct yeah nw, don't want to be pedantic, but we can create an accurate cost estimator later It's cheap to configure the circuit, so we'd just do a bit more computing to have the actual optimized cost ok You never end up verifying the proof since you know if there is enough fee in advance (Same for signatures) Only wasm needs to be executed to pick up the cost there cool the old zcash channel layout was better this new one is way too fragmented gm : @draoi pushed 2 commits to master: cbfef54aab: chore: remove artifact from protocol_version : @draoi pushed 2 commits to master: 5aa187913f: refinery: bugfix... : @draoi pushed 3 commits to master: 1deb70efc5: doc: fix typo in services.md : @draoi pushed 3 commits to master: ffee76843a: dchatd: bugfix... hey I was going through the code and noticed there are 2 instances of decode_base10 where there's a hardcoded value of 8. I know the other hardcoded values were changed recently, is this meant to be hardcoded: https://github.com/darkrenaissance/darkfi/blob/deb784d68dd15d34e2eb6dc4c4d7b6c50ccb1610/bin/faucetd/src/main.rs#L476 also on line 733 in the same file the faucet will be deleted soon so don't worry too much about that particular instance ah ok, how come? Was it only used for test purposes? it was used to airdrop coins for new users to do PoS mining, but PoW doesn't require any airdrop to get started okay, thanks for the explanation thinking of building a cpu mining rig for xmr and later darkfi https://xmrig.com/benchmark?vendor=amd Title: RandomX Benchmark - XMRig AMD Eng Sample: 100-000000053-04_32/20_N can generate more than 44.09 USD monthly income with a 44317.76 H/s hashrate on the XMR - RandomX (XMRig) algorithm.
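Pulling together the cost model from the fee discussion above (1 gas per executed wasm opcode, a ZK term derived from the circuit shape, and fixed_fee_per_sig * n_sigs + tx_size for signatures), here is a toy aggregator. Every constant is a made-up placeholder; the real accounting lives in src/validator/fees.rs and verification.rs.

```rust
// Toy fee aggregator; all constants are invented placeholders.
const FIXED_FEE_PER_SIG: u64 = 1_000; // assumption, not the real value
const GAS_DIVISOR: u64 = 100;         // the "just divide the gas" idea

/// Placeholder ZK cost: grows with the configured table area.
/// A real algorithm would benchmark/tabulate this per circuit.
fn zk_circuit_gas(rows: u64, columns: u64) -> u64 {
    rows * columns
}

/// circuits: (rows, columns) per proof attached to the tx.
fn tx_gas(wasm_ops: u64, circuits: &[(u64, u64)], n_sigs: u64, tx_size: u64) -> u64 {
    let wasm = wasm_ops; // 1 gas per wasm operator
    let zk: u64 = circuits.iter().map(|&(r, c)| zk_circuit_gas(r, c)).sum();
    let sigs = FIXED_FEE_PER_SIG * n_sigs + tx_size;
    (wasm + zk + sigs) / GAS_DIVISOR
}
```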
need to find that CPU lol errorist: threadripper 7995wx has the tray id 100-000000884 so I reckon that one is a heavily binned version/engineering sample of that line that score is with 2 cpus, with each costing 10k, so we are talking already 20k just for the cpus a dual socket server mobo will probably cost you upwards of 2k(since last gen) all that excluding ram, psu, ups(for 24/7 operation/protection) lol not 20cpus, 2 cpus XD yup, apparently these eng cpus only run with specific supermicro mobo with an older bios yeah these samples are usually vendor specific orders tho at 45$ per day(I don't know if you took power costs into account) you will need ~445 days for full roi not bad for a product of that caliber just my luck there's a bug in radeon 7700 xt linux cards i had to bind a keypress in my WM to do rmmod amdgpu; modprobe amdgpu test test back haumea: I'm looking at the Groth16 folder, let's do that after the dev meeting this monday ;) gm hackers greets ash: just look at groth16.sage, the groth16/ folder is not really needed unless you're doing that tutorial : @zero pushed 1 commit to master: 43621729fe: spec: money xfer params !topic remove dead code from money Added topic: remove dead code from money (by haumea) I'm working on it if you mean faucet stuff i also meant the functions in MoneyFunction Stake/Unstake and anything else there but why? Stake/Unstake is used by consensus consensus contract are we keeping consensus? we are going to deliver the code to auditors for review, and i need to spec everything anything unused should be deleted i mean you don't have to delete unused stuff why are we not keeping consensus? because then it needs to be specced and auditors will review it the money contract should contain just the minimal code we use, or at least the functions should be deleted from entrypoint.rs and MoneyFunction in lib.rs so they're not callable or should i spec these contract functions too? or are they unused/deprecated so we ignore them? ACTION back online they have an already good documentation so speccing them should be easy they are not deprecated, they are the basis for the PoS transition if we remove them, we must also ditch all other PoS futureproofing we have in place as then that becomes obsolete which then will make the PoS transition 1. needing to restore this stuff, 2. Block struct rewrite and having different storage for blocks version (similar to what happened in ethereum) i'm just unsure about shipping code that is audited for a future hard fork, which is likely to change anyway they will be re-audited in the future if thats what you mean otherwise I'm saying the dilemma is: keep the futureproofing bloat now, or nuke everything for a cleaner current impl? with the implication of having to handle different block structures in the future Why would they have to be audited if they're not callable?
they are callable So disable it that's what i'm asking You asked to delete it, which I don't think we should Unless we will not do PoS ever they are also part of block handling, since we store consensus related stuff in Slot i asked whether we should delete the code or disable the function someone can say that since PoS is not callable, then why does Slot exist in the first place brawndo: can you also clarify if this is correct: GenesisMint - create a new token TokenMint - mint some tokens TokenFreeze - prevent minting new tokens with TokenMint GenesisMint is a special function applicable only on genesis block GenesisMint is minting some initial supply for the genesis block TokenMint is minting arbitrary tokens - NOT native tokens TokenFreeze locks _further_ minting of some minted arbitrary token ok, how is that new arbitrary token created? with TokenMint? So e.g. if you want to create a fixed-supply token, you'd execute those two calls atomically in a single tx Yes TokenMint ok thanks native tokens are minted through mining blocks i'm going to comment out Stake/Unstake from lib.rs ok so they should be uncallable Don't bother with the tests/test-harness since I'm deeply into that bcos fees and stuff i'm not concerned with any test stuff so all good haha wait I again ask: if we disable those, the natural next question is: why do we keep the rest of the futureproofing? ok i'll just spec them, they're tiny functions anyway if i try to disable them, it breaks the consensus contract // Spend hook should be zero so there's no protocol holding the tokens back. if input.spend_hook != pallas::Base::ZERO { this means DAOs cannot stake haumea yeah it's just solo staking for now you don't need this contract, we could use spend hook and user data for this so to stake, you create a new coin where spend_hook = consensus::unstake(), and you prove this in consensus::stake() Nah since there needs to be a grace period between staking - proposing - unstaking ^^ And also you need to completely separate the staked coins vs native coins you can do that They're two different sets of coins the grace period works Extra contract is the safer and simpler solution yes it's two different sets of coins, because the native tokens are locked they are not locked this call is duplicate functionality, it even introduces errors they are burned and you mint consensus coins They're burned i'm saying with spend hook and user data, we can do this in a better way using existing functionality in money::transfer() having the extra call introduces bugs and more room for error you can do grace periods, and minting new coins you can even anonymize when the token was staked (when unstaking) !deltopic 0 Removed topic 0 upgrayedd: i'm just talking about the functions in money contract, and maybe consensus contract (optional, i can also write it without those money functions) what other consensus code is there? proposal proof and unstake request also genesis stake, where we directly mint consensus coins in genesis block !list No topics Yeah that was so we could have initial staking since no other way of distribution for voting on first block oh gawd fees are working WHEW that's great news how do you plan on implementing a wallet? Because at the moment isn't it all done via the CLI? Or will it stay that way? : @draoi pushed 12 commits to master: cf431cbd69: dchatd: downgrade version to match master : @draoi pushed 12 commits to master: 78964589ca: inbound_session: activate subscriber before accepting inbound connections...
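A hedged sketch of the fixed-supply flow brawndo confirms above: a TokenMint call followed by a TokenFreeze call, executed atomically in a single tx, so the supply is either fully minted and frozen or nothing happens. The variant shapes are illustrative assumptions, not the actual MoneyFunction definitions.

```rust
// Hedged sketch; variant fields are invented for illustration.
enum MoneyFunction {
    TokenMint { token_id: u64, amount: u64 },
    TokenFreeze { token_id: u64 },
}

/// Build a one-shot fixed-supply mint: both calls sit in one tx, and a
/// failing call fails the whole tx, so no partially-frozen state exists.
fn fixed_supply_mint_tx(token_id: u64, supply: u64) -> Vec<MoneyFunction> {
    vec![
        MoneyFunction::TokenMint { token_id, amount: supply },
        MoneyFunction::TokenFreeze { token_id },
    ]
}
```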
: @draoi pushed 12 commits to master: 7959d9741c: channel: safely shutdown the channel when an error is triggered : @draoi pushed 12 commits to master: e631e8ce68: doc: minor tweak to channel.rs doc : @draoi pushed 12 commits to master: ae154de787: chore: fix typo : @draoi pushed 12 commits to master: ccf660b924: chore: fix comments and cleanup : @draoi pushed 12 commits to master: e9e3bd41ca: Revert "channel: safely shutdown the channel when an error is triggered"... : @draoi pushed 12 commits to master: 4cb76df1f0: update Cargo.lock : @draoi pushed 12 commits to master: 195dfb6935: chore: cargo fmt should work now test test back still need to test darkirc, but works on dchat Hi! Good morning! !list No topics !topic net update Added topic: net update (by draoi) deki: it's 80% done https://github.com/narodnik/fagman Title: GitHub - narodnik/fagman: Facebook-Apple-Google-Microsoft-Amazon-Netflix super app ding ding dong !start Meeting started Topics: 1. net update (by draoi) Current topic: net update (by draoi) hey Hey ohayou we fixed some bugs in the net code 78964589ca02d48782d3d598ba4d70b4b58ed3e2 589a847205c8157dda9f9ff788dfc8c83c764011 the 2nd one is not rly a bug but rather a change to the inbound session defaults i need to rewrite a whole bunch of net code to be less fragile will do it after the spec, should be easy enough the 1st one was a bug tho we should also spec it too now the refinery and seedsync are working as expected, however after just quickly running it on darkirc i noticed some strange behavior Nice @ fixes awesome! I'll update darkirc nodes no plz wait i want to test a bit more with dnet etc Oh okay haumea: What would you like to clean up in net? There's still some flaky code regarding buffer allocations and stuff, which should be checked and improved well firstly the main_receive_loop() takes an Arc, so ideally it instead takes WeakPtr, just upgrading as needed so channel ownership becomes much easier yes that as well needs to be fixed Yeah you were saying there were still channel references that were not dropped fully? right now it's ok, but you must call .stop() on a channel otherwise it keeps running the receive loop, ideally when all owners are dropped, the receive loop will auto stop too ok that's it from me If you can find a way for main_receive_loop to just be main_receive_loop(&self) then the problems would be solved you cannot in async code sadly ic Then I suppose it would be worth it to skim where channels are being cloned And make sure there's no debris it's just the receive_loop, so instead of taking Arc, it would take Weak and then upgrade specifically where it's needed let packet = match message::read_packet(reader).await { this line is the majority of time in the loop and it doesn't need the channel so we're good another interesting thing is AsyncDrop i can make a util like this: https://github.com/t3hmrman/async-dropper Title: GitHub - t3hmrman/async-dropper: 🗑 async-dropper is probably the least-worst ad-hoc AsyncDrop implementation you've seen so far. aha that way we don't need stop Hi How does it help/work? Hi ash so right now it's non ideal because the receive loop is created by channel, ... but it also owns the channel > How does it help/work?
we don't need to explicitly call .stop().await to close the receive loop it happens automatically when scope ends But so does an Arc ofc it's not needed since with upgrade/downgrade for recv, it will just fail once channel expires We use stop() to trigger a notification for it to be removed from the set of channels we use stop() to close the receive loop self.receive_task.stop().await; hm yeah I still don't see how AsyncDrop closes it AsyncDrop means you can create a destructor to call stop() automatically when scope ends But your master references are in p2p.channels hashmap So that needs to be garbage collected ahh I see now Yeah ok Understood sry took me a bit :D anyway it's nbd, it's not even needed just makes things tighter / less error prone i just need to go through it all with a careful eye like for the contracts there's some refinery stuff that still needs to be cleaned up will finish v soon 78964589ca02d48782d3d598ba4d70b4b58ed3e2 actually fixes a race condition that only happens on localhost because of 0-latency *nod* Next? ++ !next Elapsed time: 18.6 min No further topics :) Ok just quickly, I'm deep tweaking the contract test harness tx fees seem to be working So I'm adding that to all the calls so we're basically fully functional after contract deployment? And while at it, also making the benchmark/stats more fine-grained so we can see how much time specific things are taking to exec/verify ++ cool haumea: Contract deployment is already implemented It's working already in git master ah that's great Just need to plug the fees into everything and we should be g2g :D we will need some kind of benchmarking app maybe where we can do 100 iterations of a tx and take the average time we will likely want to try different configurations, zk contracts and so on I'm making event graph sync even faster with splitting missing parents on peers (using async_iter for that) but somehow it makes it a tiny bit slower, so I'm working on that nice thanks a lot haumea: Yeah we'll get there eventually once all the pieces are in place but that's really good, we're basically done, right? haumea: Then we'll have all the high level functions to run anything ++ Yeah upgrayedd is working on the wallet ah yeah that too We still have to do merge mining ah ok got it The p2pool dev got in touch and that bridge, no news on that https://github.com/darkrenaissance/darkfi/issues/244 Title: Merge mining with Monero · Issue #244 · darkrenaissance/darkfi · GitHub atomic swaps not bridge yeah Yeah atomic swaps is lingering https://github.com/darkrenaissance/darkfi/pull/246 Title: feat(swapd): begin atomic swap implementation by noot · Pull Request #246 · darkrenaissance/darkfi · GitHub Should ping noot cool i like the idea of moving into the end phase and just doing hardening / code cleaning boring honest work :D end? wfm !end Elapsed time: 9.7 min Meeting ended !end Elapsed time: 28442368.5 min Meeting ended ty all gg Hi I'm ash and new here. I already have experience with blockchain, work fulltime developing smart contracts in Cardano and developing a groth16 validator. Interested in learning Darkfi. How is the work dynamic and any suggestions how to start. no we must vote hey ash https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html Title: Contribute - The DarkFi Book haumea saw the groth16.sage file if you can show competency and make commits, you can get hired Initiative is important ash: check that contrib guide above, if you want to learn groth16, can also assist with that.
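Returning to the channel-ownership thread from the meeting, a hedged sketch of the Weak-upgrade receive loop haumea describes; the struct and method here are stand-ins, not darkfi's actual Channel API. The loop holds no strong reference, so it winds down by itself once every owner drops the channel, with no explicit stop() needed.

```rust
// Hedged sketch of a self-destructing receive loop; not the real API.
use std::sync::Weak;

struct Channel { /* reader, writer, message subscribers, ... */ }

impl Channel {
    async fn read_packet(&self) -> std::io::Result<Vec<u8>> {
        Ok(vec![]) // placeholder for the real read
    }
}

async fn main_receive_loop(channel: Weak<Channel>) {
    loop {
        // Upgrade only for the duration of one iteration.
        let Some(chan) = channel.upgrade() else {
            break; // all strong owners gone: exit without stop()
        };
        match chan.read_packet().await {
            Ok(_packet) => { /* dispatch to subscribers */ }
            Err(_) => break,
        }
        // `chan` is dropped here, releasing the strong count again.
    }
}
```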
do you have any linear algebra knowledge of matrices? yeah we value initiative above all Cool! I want to keep my path in zk development. file:///home/narodnik/src/darkfi/doc/book/crypto/zk_explainer.html ah ffs rip lmao doxxed ash: https://darkrenaissance.github.io/darkfi/crypto/zk_explainer.html Title: ZK explainer - The DarkFi Book x D I'll be heading off for a bit, cya later o/ Mmmm I'm working towards having more of a math background, nowadays I work on the contract engineering side and my pals are doing the math effort. u.u But I'm really a newbie in math overall, don't know linear algebra ash: https://darkrenaissance.github.io/darkfi/philosophy/books.html#abstract-algebra Title: Books - The DarkFi Book start with the pinter book also https://darkrenaissance.github.io/darkfi/crypto/reading-maths-books.html Title: Reading maths books - The DarkFi Book Thank you, saved the links. Btw, I'm curious what is the ledger model of darkfi? Some form of extended-UTxO? ash: It's a utxo model, close to zcash if you look into that oh I see, as far as I know zcash doesn't have smart contracts, so how is the state managed across transactions in order to allow smart contracts? Imagine that there is a way to attach data to a utxo so a smart contract could read it to perform validations, right? ash: take a look at the spec and architecture sections of the book for example https://darkrenaissance.github.io/darkfi/arch/anonymous_assets.html Title: Anonymous assets - The DarkFi Book https://darkrenaissance.github.io/darkfi/arch/smart_contracts.html Title: Smart Contracts - The DarkFi Book etc. : @zero pushed 1 commit to master: 92387af2ab: book: remove section on parallelized tx verif : @skoupidi pushed 1 commit to master: 16103b84a7: Night of the living dead : @skoupidi pushed 1 commit to master: 025912d245: doc/Makefile: use correct darkfid rpc files : @skoupidi pushed 1 commit to master: c0f1038277: doc/arch: removed consensus contract related stuff : @skoupidi pushed 1 commit to master: 4648b8fb26: darkfid/tests: disable pos test Good (Y) hey all brawndo: for token mint, similar to how freezing works, i want to add something to control which contract functions can mint specific tokens will think it over a bit more, but it's a small patch and seems the best way test test back hello, don't think my previous message went through it's raining here and the internet always stuffs up when it rains you guys have mentioned knowing linear algebra, is it mainly knowing matrix operations? Or does other stuff come into it like eigenvalues/vectors etc? just basic matrix operations https://agorism.dev/book/math/linear-algebra/groups-matrices-vector-spaces_james-carrell.pdf ch3 of this book, although ch4 for enrichment is good ch2 is a good primer on abstract algebra fundamentals if you've never studied them before the main thing is to be comfortable working with matrices so that it feels natural, and you understand the different forms of multiplication https://ghenshaw-work.medium.com/3-ways-to-understand-matrix-multiplication-fe8a007d7b26 Title: 3 Ways to Understand Matrix Multiplication | by Glenn Henshaw | Medium oh nice, yeah I actually know a lot of that already about matrices, I did a lot of math when I was doing my degree and matrices were pretty easy probably need to refresh some of the ch4 stuff, but it's nothing new thanks for the links, it's a good way to gauge what I know.
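since the different forms of matrix multiplication came up, a small worked example (mine, not from the linked article) showing the usual three views on the same product:

    \[
    A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad
    B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}, \quad
    AB = \begin{pmatrix} 1\cdot 5 + 2\cdot 7 & 1\cdot 6 + 2\cdot 8 \\ 3\cdot 5 + 4\cdot 7 & 3\cdot 6 + 4\cdot 8 \end{pmatrix}
       = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}
    \]

entry-wise that's \((AB)_{ij} = \sum_k A_{ik} B_{kj}\) (row \(i\) dot column \(j\)); column-wise, each column of \(AB\) is \(A\) applied to the matching column of \(B\), e.g. \(A \binom{5}{7} = \binom{19}{43}\); and as a sum of outer products, \(AB = \sum_k (\mathrm{col}_k A)(\mathrm{row}_k B)\).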
Currently just focusing on getting comfortable with Rust and learning groth16, but matrix operations are straightforward (theory wise at least) https://blog.lambdaclass.com/groth16/ Title: An overview of the Groth 16 proof system ty, was following a youtube video series that goes over tornado cash too I will bbl gm : @zero pushed 1 commit to master: 32bf65a1d8: DEP 0003: Token Mint Authorization brawndo: https://darkrenaissance.github.io/darkfi/dep/0003.html Title: DEP 0003: Token Mint Authorization (draft) - The DarkFi Book click [ approve ] i can code this up very quickly it's basically just adding an intermediate layer where token IDs can choose which logic they use for issuing coins/freezing minting then we move the existing logic to another function (auth_mint) haumea: Have you kept this in mind? https://github.com/darkrenaissance/darkfi/blob/master/src/sdk/src/crypto/token_id.rs#L29-L32 (It mentions contract IDs, but the same applies) yes, it doesn't change that since the token ID is still a point btw i'm renaming coin attribute serial to coin_blind, since it's no longer used in the nullifier and its sole purpose is to blind the coin ok, ACK on all I prefer to have a pubkey as the authorisation, also with contracts Since that allows for threshold signatures pubkey still exists, nothing changes thanks All good : @zero pushed 1 commit to master: 0738b42b1c: money: change the coin_attribute serial to coin_blind. We no longer use the serial, and its sole purpose is to blind the coin. Also move it to the end, consistent with all bullas and commits used in darkfi core. ^ might want to git pull since commit touches many files : @zero pushed 1 commit to master: 85cbf1f152: drk: update for changes to serial renamed to coin_blind ^ upgrayedd: see the TODO in this commit. I wasn't sure whether to change it because it might break drk btw shouldn't wallet.sql be moved from src/contract/money/ to bin/drk/ ? : @zero pushed 1 commit to master: 5876e97d20: spec: update for changes to money coin attributes haumea: well I haven't started checking what truly works in drk yet, it just compiles lol ah cool so i can change this and if it compiles, then all good? yy ty most of them need a rewrite anyway especially dao stuff, but thats yours todo XD ah yes : @zero pushed 1 commit to master: 42a6b92ec6: drk: move coin_blind to the correct position re .sql yeah they can probably be moved away from each contract along with their column definitions I will move them away no worries ok ty : @skoupidi pushed 1 commit to master: 6b74cebdd0: drk: moved contracts sql stuff from their client to drk directly... haumea: ^^ was wondering if you guys would have something like ENS (ethereum name service), then came across this https://darkrenaissance.github.io/darkfi/zero2darkfi/darkmap.html Title: darkmap - The DarkFi Book is that what this is? Like ENS but with an actual proper privacy overlay? deki: thats an example experiment, not in place, but pretty much yeah it matches namespaces ah nice, I like that it's a proper application of ZK : @skoupidi pushed 1 commit to master: 07b47fd521: contract/money/pow_reward: simplyfied call to use last block information directly from database overlay : @skoupidi pushed 1 commit to master: f64c4e5750: contract/*: renamed all slot references to block height : @skoupidi pushed 1 commit to master: e53ea14531: contract/money/tests/integration: fixed failing test due to erroneous block height haumea: here? hey yeah was just working in bed upgrayedd: ^ haume: yo douve day?
:D anyway want to talk about time do you have some? ACTION laughs on his own puns yes i started working in bed everyday soience research shows it's optimal to enter flow state and indeed i'm getting an extra 1-2 hours in everyday noice soience so the question is, I saw that you are using days in DAO shouldn't the more correct approach be to use block height? and check that in the contract calls using get_verifying_block_height() so for example the proposal creator inputs N days to the client, and the client translates that to the corresponding block height after that timespan occurs, based on our block target it doesn't have to be perfect, as it will never actually be it's actually modulo BLOCKS_PER_DAY that window can be anything, like BLOCKS_PER_HOUR but your tx must confirm in that time or be considered invalid because the zk proof can only be calculated for that specific value that's why i use days, but i agree days can maybe be hours instead yeah but how do you define time in that context? based on blockheight darkfi/src/contract/dao/src/lib.rs:98 ok so you added it to the zk proof directly as part of the proposal bulla yes to avoid leaking any info outside like when the proposal was created this is a kind of hack / trick but i could reduce it from 1 day to 1 hour windows or 8 hours the question here is how you protect from someone back-voting aka vote on a past proposal that has expired darkfi/src/contract/dao/src/entrypoint/vote.rs:84 the zk proof will not verify in DAO::vote() here is the actual check: darkfi/src/contract/dao/proof/dao-vote-main.zk:96 yy ok that's what I was looking for so you are doing what I'm saying, aka using verifying block height inside the vm you just denominate it into days the problem here is, you know that we don't have the same block time in each network right? you mean testnet vs mainnet? yy i assume not, so yes it's inaccurate it doesn't matter that much since proposal will just get offset to the future in networks with smaller block time yep although days is maybe too big, but i think usually people use days anyway for DAOs imho days/etc should be a frontend thing the contract should use block height everywhere for its calculations so you create a proposal on height N with expiration on height M every vote on verifying height you don't need to convert to days internally that way you also don't care about block time as you assume that each client uses the one corresponding to the network it wants to use you cannot do exact height, it must be a window the window has to be big enough that tx is guaranteed to confirm in that time so i guess 1 hour is enough? height is a window how do you mean that window? if i create a proof, it has to be for a specific value yeah exactly so if i target my tx for block height N, i cannot guarantee it gets in block N exactly hence the window size, [N - w, N + w] thats for the proposal creation you can use a window there yes but the expiration time should be (N+w)+expiration_height it's for voting and any time related calc in a zk proof if i create a proof for N, you cannot pass N + 1 instead wait wait why do you need expiration to be a window? the expiration has a fixed time, but that time is only in windowed time, not block height say w = 4, then we have [0, 1, 2, 3], [4, 5, 6, 7], ... yeah, but why? it can just be 7 because lets say i want to vote and the block height is currently 4 i cannot guarantee i get in block 5, 6, or 7 but i can say i will most likely get in block 5, 6, or 7 yeah end?
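a minimal sketch of the windowed-height trick from the exchange above (illustrative constants, not the actual DAO contract code):

    const BLOCKS_PER_WINDOW: u64 = 4; // e.g. w = 4 as in the example above

    /// Heights [0,1,2,3] -> window 0, [4,5,6,7] -> window 1, ...
    fn window_index(block_height: u64) -> u64 {
        block_height / BLOCKS_PER_WINDOW
    }

    fn main() {
        // A proof made for window 1 verifies for any confirming height in 4..=7,
        // so the tx doesn't have to land in one exact block.
        assert_eq!(window_index(5), window_index(7));
        // Past the window the proof no longer verifies: vote too late and the
        // windowed value the proof commits to is stale.
        assert_ne!(window_index(8), window_index(7));
    }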
so it's easier to use windowed time, and say [4, 5, 6, 7] runtime will pass verifying slot to the proof where it will be <=7 so the vote will be valid yes but the zk proof will be invalid if i compute it for block height h = 5, but instead i get into 6 aha so its a zk computation thing yep ok gotcha gotcha gotcha it's a trick / hack / workaround I was probably not paying attention when you explained it lol but as i said maybe the window is too big, it could be 1 hour a day is fine I guess well np, we invent and use a lot of these tricks everywhere they are all undocumented (TM) i think we can do 90% of everything fully anon with these collection of tricks ok got 2 other things, unrelated to dao/votes wanna continue? certainly dao thing is clear now to me so the first thing is relatively easy lets say contracts for some fucking reason want to use a timestamp instead of height so how do we define blockchain timestamp? I say it should be the last block's timestamp is that assumption correct? yes I figured its ok, since we allow for some time drift in PoW context so as long as the contract uses a timestamp greater than that of the last block, it will be valid next? well it should respect a few rules like not too far in the future thats a different thing but yeah next yeah we are using the same timestamp checks for blocks but those are defined by the block miner I'm talking inside the vm, where for example you set a timestamp as a limit (similar to dao proposal expiration) that you want to use an exact timestamp instead of block height yeah originally i wanted to do that so there you would use blockchain_timestamp(), which will be the last block's timestamp its not safe, block height is much safer but we can provide it in the runtime api ok yeah true ok so the next thing is you remember how an epoch is defined in PoW and PoS contexts? some fixed number of blocks? in PoW it's a fixed number of blocks, in PoS it's a fixed number of slots I'm asking how that fixed number is defined in each case (I can explain if you don't remember) ah no sorry i don't know will start with PoS so in PoS we divide time in fixed intervals, called slots these are used by the consensus, to define when certain actions can occur so an epoch in that context, is a predefined/fixed number of slots, defining a certain period of actions its usually used to define when validators can enter/exit (aka stake/unstake) into the network ok makes sense so you prevent flooding and have network stability so in that context, epoch is purely based on your slots now in PoW, we don't have the time division, as you don't really care about time in an asynchronous environment synchronous means everyone does actions on certain timeframes asynchronous means nobody cares about time, you react after something happens ahh cool so in PoW, since we are in an asynchronous environment, we define epoch using different metrics in BTC, it's halving periods yep so each block's epoch, is based on its reward, which is derived on fixed height intervals where we halve the reward why not just say it's based on fixed height intervals? which reward is also based on so you have: genesis block epoch 0, blocks 1 - 10000 (where reward is 50BTC) epoch 1, blocks 10001 - 20000 (where reward is 25BTC) epoch 2- ..... yeah its the same thing ok yeah got it now the juice/question is since in PoW we don't use slots, we should also change our epoch definition, correct? aren't the definitions equivalent since PoS has fixed time slots, and each slot has a fixed time?
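a minimal sketch of the height-based epoch/halving definition described above (illustrative constants, and exact boundary conventions aside; note the epoch is integer division of height, not a modulo):

    const BLOCKS_PER_EPOCH: u64 = 10_000;
    const INITIAL_REWARD: u64 = 50;

    /// Epoch is just integer division of height by the epoch length.
    fn epoch(block_height: u64) -> u64 {
        block_height / BLOCKS_PER_EPOCH
    }

    /// Reward halves once per epoch, BTC-style (integer halving).
    fn reward(block_height: u64) -> u64 {
        INITIAL_REWARD >> epoch(block_height).min(63)
    }

    fn main() {
        assert_eq!(reward(9_999), 50); // epoch 0
        assert_eq!(reward(15_000), 25); // epoch 1
        assert_eq!(epoch(25_000), 2); // derived purely from block height
    }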
> in PoW it's a fixed number of blocks, in PoS it's a fixed number of slots no its not since a block can include >=1 slots for example leader for that slot was offline, or in our case nobody won the lottery so no block was produced for that slot, hence the discrepancy so for example if that slot is on a PoS epoch change, the block's epoch will be different, than if it was on PoW, since more slots have passed than blocks ok well the epoch is defined in terms of some fixed constantly increasing event in PoS you have slots which offer that granularity, but blocks are unreliable but in PoW, blocks are constantly occurring and they are a reliable unit of time yy thats all correct, I'm just saying that the definition/usage of our current epoch should be changed to match that right now we use the PoS definition, aka elapsed time since genesis % slot time which is wrong we have to change it to block_height % blocks in an epoch or to be even more precise, base it again on the reward halving intervals yep makes sense since that halving can be fixed across all networks, so you don't care about target block time less block time will simply mean rewards get halved faster do we even have halving? thats a different discussion lol why do we need an epoch? ah to readjust difficulty ofc in reality, with randomx we don't need epoch since difficulty is based on block production aka timestamps how often is the target difficulty changed? epoch is used in btc for difficulty yeah in randomx it always adjusts on each block based on the median of the previous 720 ones so you always target the block time target (90 seconds in mainnet config) you don't need epoch in that context since you can always derive rewards from just block height ah interesting the reason bitcoin has the difficulty adjustment periodically is because of fears over the difficulty spiking or other weird effects check src/validator/pow.rs we have all the logic/definitions there but i guess it's an unfounded fear?
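a toy sketch of the per-block retargeting idea (a simplification for illustration, not the actual algorithm in src/validator/pow.rs):

    const TARGET_BLOCK_TIME: u64 = 90; // seconds, per the mainnet config above

    /// Scale difficulty by how far the median block time of a trailing
    /// window is from the target: fast blocks raise it, slow blocks lower it.
    fn next_difficulty(timestamps: &[u64], current_difficulty: u64) -> u64 {
        let mut deltas: Vec<u64> = timestamps.windows(2).map(|w| w[1] - w[0]).collect();
        deltas.sort_unstable();
        let median = deltas[deltas.len() / 2].max(1);
        current_difficulty * TARGET_BLOCK_TIME / median
    }

    fn main() {
        // Blocks arriving every 45s (twice as fast as target): difficulty doubles.
        let ts: Vec<u64> = (0..8).map(|i| i * 45).collect();
        assert_eq!(next_difficulty(&ts, 1000), 2000);
    }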
well yeah you need that, since iirc in btc you know the network hashrate in randomx you don't need that it's not to do with knowing the hashrate, it's that a big mining pool could mine loads then shut down halting all block production it's a kind of DoS attack yeah that's what I mean by knowing network hashrate spikes/discrepancies based on how many nodes are mining the same thing can happen in randomx but for a single block the difficulty readjustment period in btc is to smooth out the curve making it very difficult https://en.wikipedia.org/wiki/Moving_average#/media/File:Lissage_sinus_bruite_moyenne_glissante.svg Title: Moving average - Wikipedia this is just a moving average graph so they join, and since the difficulty is low compared to their power, they produce a lot of blocks quickly, raising the difficulty for everyone else although btc uses the median instead of mean then they leave, so the next block will get more time to get produced yeah then it readjusts, they repeat the attack again but the algo will adjust quickly back to original difficulty nobody can mine it's just a DoS attack its not true, they mine so as far as the network is concerned, blocks are getting produced its more like not letting other nodes grab rewards but thats an issue in PoW entirely, hardware monopoly actually it's not about rewards, it's more like they don't mine until difficulty readjusts then they grab the block and push it back up again they just mine empty blocks nobody can use the network aha yeah thats different tho we can actually protect against that using MEV tactics well not if nobody else can mine what I'm saying is: we can for example disallow empty blocks or another thing is to have reward proportional to block work/fees so the epoch/block height reward is a max cap, not a given but i'm not too bothered about this "attack" since it's never happened in practice yet, would be unprofitable for the attacker and not really do much except deny txs until attacker finishes you cannot disallow empty blocks why? i would not make reward dependent on fees in a block, it's a bad idea the miner can make fake txs to himself oh true you are right unless the tx uses fee burning, in which case they lose money reward r must be strictly less than the fees f1, .., fn, so r < sum(f) (if you make reward dependent on fees in a block) we can do that so the block reward is a max cap, based on epoch/block_height halving rules, and strictly less than the fees that way we can also deny empty blocks, if fee burning is not in place yet i can't comment on the economics of miners not collecting the full fee from txs that seems like a kind of fee burning no its not block reward is different from collected fees you just don't get the full reward if fees are not over that threshold ok the fee is "discarded" then wdym discarded? in what sense anyway thats a different conversation, lets not distract ourselves so for epoch, are you ok to change the definition based on block intervals, not slots?
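a tiny sketch of the max-cap idea floated above (illustrative only, not an agreed design): the halving schedule gives the upper bound, and the claimable reward is additionally bounded by r < sum(f), which makes empty blocks claim nothing.

    fn claimable_reward(max_epoch_reward: u64, fees: &[u64]) -> u64 {
        let fee_sum: u64 = fees.iter().sum();
        // Strictly less than the fee sum, per r < sum(f) above.
        max_epoch_reward.min(fee_sum.saturating_sub(1))
    }

    fn main() {
        assert_eq!(claimable_reward(50, &[]), 0); // empty block claims nothing
        assert_eq!(claimable_reward(50, &[10, 20]), 29); // fee-bound
        assert_eq!(claimable_reward(50, &[40, 40]), 50); // schedule-bound
    }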
gm hackers upgrayedd: yes that makes sense (was eating lunch) ACTION : afk cya : @skoupidi pushed 2 commits to master: 2f5de8e999: runtime: removed slot related fns and added gas cost to util fns : @skoupidi pushed 2 commits to master: 3355575721: runtime: replaced timekeeper with verifying block height directly haumea: glhf : @skoupidi pushed 2 commits to master: ccc3a8e3a7: blockchain/header: derive block version using sdk block_version fn : @skoupidi pushed 2 commits to master: 4c45c8d592: validator: cleaned up verification and validations methods and merged the two files : @skoupidi pushed 1 commit to master: 3e15d146a3: validator/consensus: cleaned up slots logic : @skoupidi pushed 1 commit to master: 2805f1435c: validator: fixed minor encoding/decoding bugs brb : @draoi pushed 2 commits to master: 79512e6dce: chore: fix debugs and cleanup : @draoi pushed 2 commits to master: ccc6b25754: inbound_session: implement PingSelfProcess... hey brawndo, i don't think we need FuncIds map actually since the main use case is to use a func_id inside ZK, we can define func_id = hash(contract_id || code), which can be computed inside wasm so to check the parent caller matches the func_id inside ZK, you just calc the parent caller's func_id and pass that in i think this is the simplest way for our use case... will think a bit more on this Sounds good !topic DRK vesting and mining contract mechanics Added topic: DRK vesting and mining contract mechanics (by haumea) lol i found a critical bug tx/mod.rs in pub fn verify_sigs(&self, pub_table: Vec<Vec<PublicKey>>) -> Result<()> { assert!(pub_table.len() == self.signatures.len()); but pub_table is of type Vec<Vec<PublicKey>>, and it doesn't check the inner vec sizes so you can just attach no signatures and it will pass if you have a money::transfer() with 3 outputs, i can replace the outputs with my own ones and just skip the signatures and it will pass 0_0 X_X O.o I'll assume you're on it :) sure thing interestingly the effects of this attack are highly mitigated by the anonymity of transfers because you don't know the value commits to fake them etc. so maybe in some ways, anon cryptosystems are less hackable : @zero pushed 1 commit to master: 7edb0cd217: apply DEP 0003: Token Mint Authorization : @zero pushed 1 commit to master: c99186cc73: critical bug: verify_sigs() should also check inner length of vecs haumea: how are DRK vesting and mining rewards related? : @skoupidi pushed 1 commit to master: f79bd2de18: sdk, blockchain, validator: removed slot stuff about the contract mechanics haumea: elaborate more? upgrayedd: my last commit enables token minting contracts so i want to understand how coinbase token minting and vesting could work on chain anyway going to eat dinner/sleep routine now, gn haumea: aha so its informative discussion, ok gotcha gotcha gotcha gn hf yep exactly ok gn cya tmrw upgrayedd: remember we made that change to the refinery logic so that it only refines nodes that match our transports i just noticed that the default transports are an empty vec, so if a user doesn't configure the transports it effectively disables the refinery should we a) force ppl to configure transports or b) add some logic like if there's no transports configured, try to refine all the nodes?
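back on the verify_sigs() bug above, a toy self-contained version of the missing check (paraphrased; c99186cc73 is the actual fix):

    struct PubKey;
    struct Sig;

    fn verify_sigs(pub_table: &[Vec<PubKey>], signatures: &[Vec<Sig>]) -> Result<(), &'static str> {
        // Outer lengths: one pubkey list and one sig list per call.
        if pub_table.len() != signatures.len() {
            return Err("outer length mismatch");
        }
        for (pubkeys, sigs) in pub_table.iter().zip(signatures.iter()) {
            // The check that was missing: inner lengths must agree too,
            // otherwise attaching zero signatures for a call slips through.
            if pubkeys.len() != sigs.len() {
                return Err("inner length mismatch");
            }
            // ... then verify each (pubkey, sig) pair over the tx hash ...
        }
        Ok(())
    }

    fn main() {
        // A call expecting 3 signers with zero attached sigs must now fail.
        assert!(verify_sigs(&[vec![PubKey, PubKey, PubKey]], &[vec![]]).is_err());
    }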
s/nodes/hosts option a) is probably better for the network option b) assumes nodes that do not configure transports can support "all" the transports, and then through trial and error will basically delete transports that the node does not support, resulting in those transports not being propagated from that node to the network If they're unconfigured, you could default just to tcp+tls nym is not working well anyway ++ hey yesterday i ran make test for money fixes, but overnight ran make test in repo root... some stuff broke in drk so fixing that : @zero pushed 1 commit to master: 3fc7ba6e2c: fix broken drk compile due to money token call changes turns out we were never creating sigs for clear inputs with yday fix, now it fails... fixing lol such a good find : @zero pushed 1 commit to master: eb635dc3df: fix failing DAO test recently i recloned for codeberg and just realized my cargo fmt commit hook was missing: https://darkrenaissance.github.io/darkfi/dev/dev.html#cargo-fmt-pre-commit-hook Title: Development - The DarkFi Book fyi for others in case they also did the same darkfi/src/runtime/vm_runtime.rs:388 _ => unreachable!("Got unexpected result return value: {:?}", ret), can this be triggered in wasm by crafting a bad contract? oh i guess it's not possible, nvm : @zero pushed 1 commit to master: 9d33a10a0b: use proper FuncIds to ref contract funcs such as spend_hook in money : @draoi pushed 6 commits to master: e2a47f99db: dnet: fix bug that caused lilith info not to be displayed : @draoi pushed 6 commits to master: 716af31848: dnet+lilith: enable hostlist debugging... : @draoi pushed 6 commits to master: ba8edefb59: net: restructure hostlist removal logic... : @draoi pushed 6 commits to master: b1d16e1153: inbound_session: don't stop tasks that haven't been started... : @draoi pushed 1 commit to master: 821ba475d7: contrib: add TODO : @zero pushed 1 commit to master: 71ec264e1d: fix drk broken by last commit : @zero pushed 1 commit to master: 62234a8f88: replace info level for test-harness : @draoi pushed 3 commits to master: 183717f04e: script/lilith_spawns: update to new hostlist usage : @draoi pushed 3 commits to master: 1251ad3b38: refinery: don't refine hosts that we are connected to/ trying to connect to... : @draoi pushed 3 commits to master: 488b9c8d98: protocol_address: tweak response to receive getaddr... net code seems to be working now dasman: want to try deploying some liliths? :) !list Topics: 1. DRK vesting and mining contract mechanics (by haumea) draoi: done seed is "tcp+tls://dasman.xyz:5262" peer is "tcp+tls://dasman.xyz:2654" awesome ty No problem at all :) sweet, spun up a quick local node and synced event graph/ found ur peer instantly it's outbound only tho for now will spin up some bidirectional nodes on the servers Sweet I see your msgs same! :DD biab how does merge-mining work? : test : echo 1 2 3 : test : hey dasman you on here? 
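regarding option b) from the refinery question above, the fallback could be as simple as this (illustrative sketch, not the actual refinery code):

    /// Option b) sketch: an empty configured transport list means
    /// "try to refine every host" instead of disabling the refinery.
    fn should_refine(host_transport: &str, configured_transports: &[String]) -> bool {
        configured_transports.is_empty()
            || configured_transports.iter().any(|t| t == host_transport)
    }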
: i see some msgs but idk when from : heyheyhey : :D : i sent only two msgs when i spun up these nodes : draoi: ^ : yep i see them : haha niiice : maybe time to redeploy the bridge : ah mirror already works on old darkirc net : still works idk how test back okay that was not intended XD : test test back : werks : yo : :) : :D : oh wait, commit bot should mirror but hasn't : draoi: logs look so clean, I'm very happy XD : @dasman pushed 1 commit to master: d2844e43f0: src/net: cargo fmt and clippy needless_borrow fix : gm : that's awesome https://dep-tree-explorer.vercel.app/api?repo=https%3A%2F%2Fgithub.com%2Fdarkrenaissance%2Fdarkfi&entrypoint=src%2Flib.rs Title: Dep Tree !topic darkirc migration Added topic: darkirc migration (by haumea) is there a reason why Halo2 was chosen for the ZK proofs? As opposed to say circom? this MIT course I've been doing about ZK uses circom https://zkiap.com/ Title: [MIT IAP 2023] Modern Zero Knowledge Cryptography I will bbl : gm : nice my darkirc instance has been running without failure for days hey deki: circom is javascript and plonk trusted setup whereas halo2 is no trusted setup draoi, dasman: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/final-printversion-10-5-14.pdf Title: Microsoft Corporation see section 2.2 on Event Graphs it's worth keeping this book around and studying it if possible it has a lot of info for architecting our p2p subsysten subsystem hey btw ppl are travelling today can we move dev meet to 2m? sure haumea: ty : test test back woah based self-healing connection haumea: I'm guessing I should just skip circom then? Focus on halo2, because by the looks of it, it's using Rust anyway *implemented in Rust in case anyone missed this: fyi the meeting has been moved to 2m so 24hrs and 15m from now gm !topic configurable generators Added topic: configurable generators (by haumea) gm !list Topics: 1. DRK vesting and mining contract mechanics (by haumea) 2. darkirc migration (by haumea) 3. configurable generators (by haumea) : gm my darkirc stopped working errorist: are you on latest master? any other info? 
like in the debug output etc I did git pull and make darkirc but can't connect to the seeds listed in the config https://pastebin.com/WYq0BRDf Title: 10:04:13 [WARN] error: tor: remote stream error: Protocol error while launching - Pastebin.com we are not using the default seeds rn just this one: seeds=["tcp+tls://dasman.xyz:5262"] ok, let me try with it not sure this seed supports tor you might find tor addresses to connect to from the seed but not sure how it will behave if your transport is set to tor only try it anyway :) : gm : hey the seed does not support tor hmm, tried tcp only and it says invalid data: https://pastebin.com/eQCWzYD2 Title: 10:23:09 [ERROR] [net::tls] Failed verifying server certificate signature: inval - Pastebin.com the seed can support tor, but i guess dasman means it's not yet setup with tor correct, haven't set up for tor errorist: set this: allowed_transports = ["tcp+tls"] trying to recompile from repo now but getting this: error[E0635]: unknown feature `stdsimd` dasman: tried, but didn't work so I removed the darkfi dir and cloned from repo, will recompile now bbl !topic not no entrypoint feature Added topic: not no entrypoint feature (by haumea) fyi latest nightly is broken af ok thought I fucked up something :D nope, thats the drawback of living on the cutting edge errorist: I can help you if you want to build using an older version it's ok, i'll wait for them to fix the nightly : @skoupidi pushed 1 commit to master: c8b726e0ca: chore: clippy : @skoupidi pushed 1 commit to master: 204cfcc25d: chore: fmt : @skoupidi pushed 1 commit to master: b78d96a508: contract/money/client: removed obselete sql references : @skoupidi pushed 1 commit to master: 7ac561fe67: util/time: TimeKeeper yeeted greetz : yo : o/ !list Topics: 1. DRK vesting and mining contract mechanics (by haumea) 2. darkirc migration (by haumea) 3. configurable generators (by haumea) 4. not no entrypoint feature (by haumea) not sure if brawndo is around upgrayedd: around? yy here we can start then !start Meeting started Topics: 1. DRK vesting and mining contract mechanics (by haumea) 2. darkirc migration (by haumea) 3. configurable generators (by haumea) 4. not no entrypoint feature (by haumea) Current topic: DRK vesting and mining contract mechanics (by haumea) i'm here but the first point i might need feedback from brawndo on well i'm curious how minting new DRK works in the contracts i saw there's a call "genesis mint" we can postpone this topic till nxt week? haumea: new DRK is based on PoWReward yeah sure i'll re-add which is the tx call the miner attaches to their block in order to get the reward+fees why is it called "genesis" mint? genesis mint is a different thing, where it enables us to create tokens on the genesis block aka premint genesis means block 0 its 2 different calls aha ok so we can delete it then? why delete it? why do we need a separate call? isn't it PoS related? no its not PoS related genesis block doesn't have a block producer surely we need a vesting contract?
so that call is used to generate preminted stuff well yeah a vesting contract is needed, it will still need that call to mint everything and chug it into it it's no longer needed, we can delete it, then use Money::token_mint() but i'll wait until brawndo is around to talk with him about it token mint is a generic one for arbitrary token minting yes genesis_mint is hyper specific to native DRK tokens which have no mint authority i don't see why it needs a special call token_mint() no longer has a mint authority oh you should have started with that lol it instead has a reference to another contract function with the rules for minting ah yeah sry so we would make a vesting contract where you can claim tokens according to the schedule ok yeah then we can discuss with brawndo on the correct approach to this keep it in for now ++ and we yeet it once the proper config is created yep sounds good where will tokens vest, on darkfi or eth mainnet or both? errorist: TBD ok, thx well some part must be on darkfi like dev fund and community ecosystem pools makes sense (not my area) !next Elapsed time: 8.3 min Current topic: darkirc migration (by haumea) should we do this or what's the timeline? when should we announce we should test first ok i will deploy my nodes ty agreed, we need more people using darkirc it's been working well so far, but rn it's just me and dasman afaik my connection is stable so not the best test i could also try mobile to see if it remains connected i'll try that and switching between networks https://darkrenaissance.github.io/darkfi/misc/ircd/ircd.html#installation-android Title: ircd - The DarkFi Book ^ also be good if others try android builds !next Elapsed time: 2.1 min Current topic: configurable generators (by haumea) !next Elapsed time: 0.1 min Current topic: not no entrypoint feature (by haumea) !next Elapsed time: 0.0 min No further topics :D !end Elapsed time: 0.2 min Meeting ended XD ok thanks a lot everyone !topic DRK vesting and mining contract mechanics Added topic: DRK vesting and mining contract mechanics (by haumea) !topic configurable generators Added topic: configurable generators (by haumea) !topic not no entrypoint feature Added topic: not no entrypoint feature (by haumea) haha thanks a lot everyone ty all <3 <3 +1 errorist: want to run a seed node btw? haumea: sure, for darkirc? yeah it's generic, see the docs for lilith I was running one tor node before the old darkirc went down can do both tor and normal node ideal if it supports all transports i.e.
tor, tcp, tls that way it serves as a bridge will checkout the docs and let you know if I have some questions also running darkirc with tor *and* tcp / tls is bridging the networks / making them more resilient since clients using only tor can reach the clearnet nodes through them as well neat : @zero pushed 3 commits to master: 8b3dee989d: DAO::exec(): add missing signature : @zero pushed 3 commits to master: 35d9f11941: update VKS hashes for test harness : @zero pushed 1 commit to master: 603e917763: DAO: switch to 4 hour windows instead of days upgrayedd: i switched to 4 hours instead of 24 hours ^ if the tx doesn't confirm within 4 hours, then you must retry it again : @zero pushed 1 commit to master: 4e92dafafe: remove dao- prefix from dao proof filenames haumea: noice : @zero pushed 2 commits to master: 0b9c3eff5f: contracts: cosmetic cleaning up of proofs by prefixing attributes in bulla (kinda like is popular for postgres attributes in a table) : @zero pushed 2 commits to master: 25cae60aae: remove Dao prefix from proof names greets good evening from down under howdy !list Topics: 1. DRK vesting and mining contract mechanics (by haumea) 2. configurable generators (by haumea) 3. not no entrypoint feature (by haumea) gm lmk when i can start spamming you re: topics Lemme finish breakfast yeah be relaxed : @zero pushed 1 commit to master: d79e3542eb: book: add a START HERE guide since people no longer can to read properly (internet coom breins) and i can not to write properly haumea: Here brawndo: so in halo2 lib inside halo2_gadgets/src/ecc/chip/constants.rs there is a function called: pub fn find_zs_and_us<C: CurveAffine>( base: C, num_windows: usize, ) -> Option<Vec<(u64, [C::Base; H])>> { this enables calculating the Z and U values used to add the generator constants, so we can actually make zkas have configurable generators Yeah correct should we add a call inside wasm to enable adding a new generator constant for use in zkas? What does wasm have to do with it? zkas and wasm are independent of each other yes but the contract wants to use a particular constant so how are they configured? calling this func has some overhead, so you want to precompute it and save the result we could also cache it in a lookup table actually maybe that's best, done directly by the zkvm (x, y) -> (z, u) values lazy init https://github.com/darkrenaissance/darkfi/blob/master/src/zk/vm.rs#L625-L646 The thing is they have to be compiled in https://github.com/darkrenaissance/darkfi/blob/master/src/zk/vm.rs#L627 they don't have to let vcr = ConstBaseFieldElement::value_commit_r(); Yeah that's a function yeah check it out darkfi/src/sdk/src/crypto/constants/fixed_bases.rs:121 sec Yeah so how would you be adding an arbitrary one that the zkvm has no knowledge of? We can't make the lib depend on non-native contracts a point is specified solely by pallas::Affine, and then you use find_zs_and_us() to calculate the pre-computed u and z values from G So you want to extend the constants syntax of the zkas lang? when loading a point for use with halo2 lib, it just needs an object with methods to access those values pretty much, or enable creating custom ones from init() inside the wasm contract I think I prefer keeping them disconnected well an alternative way is in zkas, you can specify the (x, y) coord for a const, and it maintains a cache Probably no need to cache In the constant section then we can have assignments allowed well calling find_zs_and_us() has a cost, that's why they're precomputed ahead of time e.g.
EcFixedPoint MY_GENERATOR = 0xabc, 0xdef ++ but just saying if a const is used a lot then you would want to cache it otherwise it's redundantly computed every time for example you might use a generator in multiple proofs in a single contract It's a tradeoff, I don't know how much complexity it involves And I don't know how slow that function is i mean it's doing sqrts() in a finite field which is a slow arithmetic func, but maybe not too big a deal in the grand scheme of things https://github.com/darkrenaissance/darkfi/blob/master/src/zkas/parser.rs#L313 This is where you'd have to modify the parser and go from there we could add it for now without the cache and then later we can add a cache if needed Sounds good yeah ok great I need to focus on finishing the tx fee stuff so it's ready for audit yeah understandable, this is not urgent just was pretty happy to figure out the consts are configurable since we overuse generators too much secondly, about the genesis_mint() money function... i understand this pre-mints a certain alloc of drk on the genesis block Correct but now the token_mint() allows specifying rules for how a new token is minted, so we can actually create a proper vesting contract for on-chain DRK Yes but token_mint cannot mint native DRK Have you read how token_ids are generated? It's in src/sdk/crypto/token_id.rs on top it's changed, and TokenId will be moved into money/src/model.rs https://darkrenaissance.github.io/darkfi/dep/0003.html Title: DEP 0003: Token Mint Authorization (accepted) - The DarkFi Book The point is that there is no secret key possible for native DRK you don't need a secret key And DRK should only be minted by block rewards Nothing else the token_id for DRK contains a reference to the vesting contract then the vesting contract allows claiming DRK according to the vesting schedule when you try to call money::token_mint(), then token_mint() will make sure it's invoked by the vesting contract which has its rules for who can claim what and when Vesting is not supposed to work this way Legally for vesting all tokens have to be minted already So you need to have a pool of tokens existing how does that work with a non-fixed supply? Irrelevant The tokens have to exist ok but lets say these tokens are minted, then how can they be claimed if they're owned by a secret key? is there really such an arbitrary legal restriction? it's much easier to make a special contract to put all the rules there There is I'm sure there's a way to just do a musig2 scheme or frost between the contract and the vestee ok well if you still want to pre-mint the coins ahead of time, then it's still possible using the spend_hook basically the genesis_mint() is removed and we create another contract, then call token_mint() to pre-mint the entire supply with the reference to the other function which specifies all the coins must have the spend_hook set for a function in that contract the secret key inside the coin is unused, but when trying to spend the coin, you must satisfy the rules of that special function we don't need a special genesis_mint() function, and can use the generic token_mint() one. Any DRK specific logic can be put in a dedicated function, ideally a separate contract. okay So token_mint is getting changed to just be able to mint _any_ token (including native)? Then the authorisation is on a separate layer? Where arbitrary tokens have to produce a signature whose pubkey also derives the token ID And native token can have some other ruleset?
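making the constants extension above concrete: a sketch of what the zkas constant section could look like with the proposed assignment form (the inline coords are the proposal, not implemented syntax):

    constant "MyContract" {
        EcFixedPoint VALUE_COMMIT_RANDOM,
        # Proposed: inline (x, y) affine coords for a custom generator.
        # The compiler/zkvm would derive the z and u window tables from these
        # via find_zs_and_us(), ideally cached so repeated proofs don't recompute.
        EcFixedPoint MY_GENERATOR = 0xabc, 0xdef,
    }

and a rough sketch of the precompute step itself (assuming find_zs_and_us is importable as quoted above; 85 windows for a 255-bit scalar with 3-bit windows, and H = 8):

    use halo2_gadgets::ecc::chip::constants::find_zs_and_us;
    use pasta_curves::pallas;

    const NUM_WINDOWS: usize = 85;

    /// The slow part: for each window this searches for z such that y + z is
    /// a square while y - z is not, doing sqrts in the field. Run it once per
    /// custom generator and store the table next to the (x, y) coords.
    fn precompute_window_tables(base: pallas::Affine) -> Vec<(u64, [pallas::Base; 8])> {
        find_zs_and_us(base, NUM_WINDOWS).expect("generator yields valid z/u values")
    }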
yes correct we have auth_token_mint() which is the old ruleset LGTM nice, just trying to figure it out ok last question then won't bother you more what is not(no-entrypoint) feature for? why not just "entrypoint"? is this documented somewhere? The entrypoint is wasm-specific and is a "default" feature because it's for a contract You enable "no-entrypoint" when you want to use that contract in a client API, or more importantly - as a dependency of another contract It's essentially the "main()" function oh ic ok thanks about DRK token: i will create a skeleton sometime this month we can play with Sounds cool any workarounds we can add after depending on what we need After the fee stuff I can help out on the zkas constants stuff even the ecosystem DAO and which pools it vests into can be controlled by DAO voting since we have generic function calls by the DAO now Yeah no rush, it's not urgent, but just very good to have before mainnet ++ ok ty yw Zcash Sustainability Fund ZIP plans to replace the coin issuance mechanism per block by mining the remaining issuance in a ZSF_BALANCE field and then paying the miners, stakers, dev orgs from it https://github.com/zcash/zips/pull/703/files?short_path=1f254a6#diff-1f254a6f698253908c3b69847e4e5df039d8b74200b065d71399da672ee23cce Title: Zcash Sustainability Fund by tomekpiotrowski · Pull Request #703 · zcash/zips · GitHub : @skoupidi pushed 1 commit to master: 2dc6832656: validator: removed genesis tx total calculation | darkfid: use next block height for contracts deployment afk, bbl cya cya https://darkrenaissance.github.io/darkfi/learn/dchat/deployment/deploy.html?highlight=lilith#deploying-a-local-network Title: Deploy - The DarkFi Book is this guide up to date or should I use [network."darkirc_v4"]? draoi: ^^ : @skoupidi pushed 1 commit to master: 3547539c3f: darkfid/tests: properly test sync logic by generating next blocks using fixed mining difficulty errorist: that is a guide for a toy/demo app we have called dchat so it's only intended to explain some concepts about the net code [network."darkirc_v4"] would be the right naming for darkirc configuration tho tbh the network name doesn't matter too much, more important is the accept_addr and other config fields https://hackmd.io/0QEWu7qYR8-LGWvUXoxxOQ Title: How to handle circuit upgrades - HackMD b draoi: ok, thx. what about the whitelist file? would be cool if someone can share a sample config, then I can try spinning up some nodes tomorrow errorist: the hostlist path is just wherever you want to store the hostlist, e.g. /.local/darkfi/lilith/darkirc_hostlist.tsv . Hello, I was interested in learning, I read the learning part of the documentation and I got to work with Python and I read Abdullah's book (I loved it, but I'll leave that to the philosophy section). I had tried programming in the past (also with python) and failed, could you give me any tips that you found useful? coda: if you're new to programming, probably best to learn some theory behind it all, such as data structures, data types, algorithms etc no need to go too deep.
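back on the not(no-entrypoint) feature explained earlier, this is roughly how darkfi contracts gate it (macro fields as in the money contract; exact details may vary per contract):

    // Compiled in by default; building with --features no-entrypoint strips
    // the wasm entrypoint so the crate can be used as a client library or as
    // a dependency of another contract without a duplicate "main()" symbol.
    // (In the contract's Cargo.toml: [features] no-entrypoint = [])
    #[cfg(not(feature = "no-entrypoint"))]
    darkfi_sdk::define_contract!(
        init: init_contract,
        exec: process_instruction,
        apply: process_update,
        metadata: get_metadata
    );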
Look up CS50 on YouTube, it's an introductory coding course at Harvard, goes over everything depends where your skills set are at, if you do know another programming language, then best bet is to get into Rust, which is what DarkFi is built on gm gamers gm gm coda: https://darkrenaissance.github.io/darkfi/dev/learn.html Title: Learn - The DarkFi Book see the python book there also hang out on libera IRC #python channel gm brawndo: maybe we should only allow clear_inputs for the first tx in a block? (and not for unconfirmed txs) and a single clear input Reasoning? what do we need them for apart from coin generation events? i guess minting new coins (of other tokens, so it could be restricted to DRK) draoi: i'm stuck :< 09:45:03 [ERROR] [P2P] Error starting listener #0: IO error: address not available asked dasman, maybe he can help Token minting has hidden supply I'm not sure what the coinbase tx uses errorist: sounds like your accept_addrs is misconfigured The coinbase tx is the last one in a block btw ok, doesn't the coinbase use transparent inputs? It's the optimal place since then the miner doesn't have to rebuild the tx merkle tree in the block, they can just append it coinbase has no inputs draoi: dasman told me to use public/resolvable address, and I tried with my public IP/domain but getting this error. I can ping the hostname from my machine and it's getting resolved Just an output of known value ah no but for auditable supply, surely we want a transparent input It does use a clear input, sorry ok cool I just misread something (Have 5 vims open lol) idk if we want to restrict usage for DRK token to a single clear input in coinbase tx anyway just an idea Don't see the need The contract model already only allows 1 input and 1 output in coinbase Otherwise it doesn't even deserialise correctly MoneyPoWRewardParamsV1 ahhh ok it's a different call Yeah forget about transfer lmao darkfi/src/contract/money/src/entrypoint/transfer_v1.rs:170 can i disable this check then, and make it fail if token_id = DRK? Transfer doesn't need clear inputs at all They were just used for faucet stuff yeah true delete them I'd just remove it honestly sounds good we have token_mint which supports optional clear mint Does it? (so the issuer of the token can decide) yeah The issuer can just reveal the value blind well it must be mandatory for all mints No we're minting hidden supply now if i make a token with transparent supply, then anyone who mints token through a smart contract must follow the rules it's decided by the token author Yeah well doesn't have to be part of the protocol yeah it's not, you supply the rules through a smart contract function the default is just the same as before ++ ok i'll remove clear inputs then yay deleting code we should have incentive/reward system for commits which reduce overall LoC :D PR to rewrite darkfi in python wen haskell darkfi in 10 SLOC brawndo: coinbase tx(aka block producer) is in first position of the blocks txs vector, not last Didn't we come to terms that it'd be last? yeah, but we changed it later All other txs have to be verified before And then you can verify coinbase and the fee reward The block building logic is also a lot simpler since you can have a single loop https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/validator/verification.rs#L192-L207 Title: darkfi/src/validator/verification.rs at master - darkrenaissance/darkfi - Codeberg.org just swap the order there and it should be g2g Wjere Where's the block building? 
producer transaction uses different verification than normal txs https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/validator/consensus.rs#L74-L126 here I guess Title: darkfi/src/validator/consensus.rs at master - darkrenaissance/darkfi - Codeberg.org this should be push_front tho XD https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/validator/consensus.rs#L86 Title: darkfi/src/validator/consensus.rs at master - darkrenaissance/darkfi - Codeberg.org btw I don't see how/where we use the tree in the header perhaps impl missing? sec header_store.rs Line 51 As you're adding txs to the block you need to append them into that tree https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/blockchain/block_store.rs#L120-L130 Title: darkfi/src/blockchain/block_store.rs at master - darkrenaissance/darkfi - Codeberg.org yeah its there should we also verify that the tree matches the appended txs? Of course ok, can we decide on coinbase/producer tx position in the vec? last or first? (so I can do both right away) Last because it'll be easier for block building Since you need to append all txs, gather their fees, and then build the coinbase tx So you can simply append coinbase to the Merkle tree Otherwise you have to rebuild the tree And there's no point in hacking the verification order when this is just fine kk changing it now Noice wait the verification order should be first coinbase, then the rest of the txs? No the opposite Because you need to gather fees kk first txs then coinbase and for tree verification of the header, we just create a new tree appending tx hashes and checking header.tree == tree correct? Yeah you can do it as you're verifying them And when you're done with the txs, you'll have the tree so you can compare it with the block header true true, so verify txs returns the tree and then you also append the coinbase tx Yep gg izi It's straightforward luckily Add a placeholder comment for the fees stuff Also there should probably be a block gas limit well right now we have a txs len limit so that should be replaced by the gas one ah well I guess that one is fine too https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/validator/consensus.rs#L76 Title: darkfi/src/validator/consensus.rs at master - darkrenaissance/darkfi - Codeberg.org I just didn't see it enforced here https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/validator/consensus.rs#L381-L384 Title: darkfi/src/validator/consensus.rs at master - darkrenaissance/darkfi - Codeberg.org its enforced when we grab the uproposed_txs ehh, fix that :D fix? That can allocate the entire mempool twice Should be picking one by one from sled until you fill an array Or you can take() TXS_CAP From the iter or we can just grab all already proposed txs hashes, and create a sled fn: grab_txs_not_in(txs, limit) You have an iter already https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/validator/consensus.rs#L368 Title: darkfi/src/validator/consensus.rs at master - darkrenaissance/darkfi - Codeberg.org kk will figure it out dasman: how is your seed node looking rn?
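a minimal sketch of the verify-then-compare flow agreed above (the "tree" here is a toy hash accumulator, not the actual Merkle tree type in the codebase):

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Toy accumulator standing in for the real incremental Merkle tree.
    fn append(acc: &mut u64, tx: &[u8]) {
        let mut h = DefaultHasher::new();
        acc.hash(&mut h);
        tx.hash(&mut h);
        *acc = h.finish();
    }

    fn verify_block(header_tree: u64, txs: &[Vec<u8>], coinbase: &[u8]) -> Result<(), &'static str> {
        let mut tree = 0u64;
        let mut fees = 0u64;

        for tx in txs {
            // Placeholder: verify the tx here and accumulate its fee.
            fees = fees.saturating_add(tx.len() as u64); // stand-in for a real fee
            append(&mut tree, tx);
        }

        // Coinbase last: it can claim `fees`, and the already-built tree just
        // gets one more append instead of a rebuild.
        // Placeholder: check the coinbase reward against `fees` here.
        let _ = fees;
        append(&mut tree, coinbase);

        if tree != header_tree {
            return Err("header tree mismatch");
        }
        Ok(())
    }

    fn main() {
        // The producer builds the header the same way, coinbase appended last.
        let txs = vec![vec![1u8], vec![2u8]];
        let coinbase = vec![9u8];
        let mut header = 0u64;
        for tx in &txs {
            append(&mut header, tx);
        }
        append(&mut header, &coinbase);
        assert!(verify_block(header, &txs, &coinbase).is_ok());
    }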
i can see the port is open but getting this: [WARN] net::session::seedsync_session: [P2P] Failure during sync seed session #0 [tcp+tls://dasman.xyz:5262]: Channel stopped : @zero pushed 1 commit to master: f74817ba8e: move TokenId from SDK to money contract Why Pls revert this : @zero pushed 1 commit to master: 3e6e54121d: move nullifier from SDK to money contract lol its unstopable omg why ffs stop haumea: Revert these please ^ why? SDK is the correct place for it As a bonus you broke the python lib no it's not, TokenId and Nullifier are specific to money, they aren't global data structures i can fix the python lib anything that's broken i can fix Do not assume that They can be used anywhere yes they can be used by importing darkfi_money_contract but they are specific to money everything else is stuff like: public key, schnorr sig, contract ID python bindings build fine draoi: the seed is healthy, I can connect to it no problem oh weird tho I get a warning about white-list being empty seed says that ^ that's not ideal as it means that your node is not sharing other nodes (it shares what's on the whitelist) i stopped some of my nodes so maybe that's why but if your peer is still up it shouldn't happen aha I see peer still running i think the peer may have gone into the anchorlist could you stop the lilith for a sec and inspect the hostlist? should I restart lilith? yy cool lmk if the peer is in anchorlist how? ah cat the hostlist? yeah although it shouldn't matter if it's on the anchorlist i'm realizing. cos protocoladdr sends anchorlist peers as well shows only this: greylist tcp+tls://anon-fore.st:26661 1707389374 so it's lost track of ur peer somehow i saw a couple of my nodes rejecting your peer with connection refused error but why? i'm not malicious :( maybe you reached max inbound connection count hah maybe i'll set a higher number i think it was 8 or something i would try running it with dnet yeah put 20 or something ++ i think default is 20 lmk when seed is up again i will try to reconnect ah wait Peter's hostlist has xeno.tools in anchorlist Peter's, peer's who is peter haha ok that's fine XD what's the time stamp? 1707389456 seems fine they are 2 nodes i deployed this AM s/deployed/redeployed alright everything is set ok reconnecting : test test back do you guys need more node operators to verify their stability? I don't mind leaving my PC on, I've currently had it running for 5 days and haven't had connection issues with ircd : hiiii deki: yes that would be great we are testing darkirc on latest master deki: ircd is pretty stable we're testing darkirc seeds=["tcp+tls://dasman.xyz:5262"] ^ for darkirc test okay noice, I'll try to set it up within the next hour before I go to bed (past 10pm here) thanks ty, if you can set an external addr that would be best (to accept inbound connections) but not necessary if you don't have ipv6 or a hostname you can use deki: you should realy stop advertising your timezone... haumea: Is there a reason why spend_hook is gone from the Input struct? haha why? I'm probably the only person from australia here >.> or your country.... draoi: yeah I don't have ipv6 btw its like security opsec 101 upgrayedd: okay fair, you're right haumea: It seems insecure what you're doing brawndo: we don't need it in the input struct haumea: Assuming that a transaction is built honestly there's no assumption You're checking for FuncId by looking if there are parent calls in the tx the zk proof will fail, it's actually stricter this way than before How come? 
ah it's still in the metadata yep it's more direct this way rather than checking spend_hook is set, and so on Yeah ok and the code is cleaner, less room for errors If you don't mind, you could add more stuff in the commit message to make it clearer (this is the breakthrough btw why we don't need the func_id hashmap) ok apologies 9d33a10a0bdee958c3d86acc7e0f74034e2b9a52 relevant commit yep yep btw i've become a big fan of using types for everything. it's very easy to mix up 2 values in a bulla/commit when using pallas::Base everywhere but impossible when the fields are strongly typed, which also makes changing them much easier ++ ACTION adding a Blind type now haha rebasing will take me days lol but nw ah yeah maybe i should have given a warning, i didn't realize how big it would be Go for it, nw sec lemme just give you a patch haumea: https://termbin.com/b6y5 Add this while you're at it This is also less error-prone deki: you can try setting up a tor darkirc node https://darkrenaissance.github.io/darkfi/misc/tor-darkirc.html Title: tor-darkirc - The DarkFi Book brawndo: ok will do thanks ty darkfi/src/contract/money/src/client/mod.rs:79 what do you think of this TODO? should we delete this or is it used anywhere? draoi: thanks, on it now haumea: Yeah I dunno, you added it, but I think swaps are likely to use it Since the parties need to verify the values Review the client side of the atomic swaps ok thanks https://github.com/darkrenaissance/darkfi/blob/master/bin/drk/src/swap.rs There's init_swap() and join_swap() The former is when you're doing the first part, and the latter is for the second half understood where do I find the config file after running ircd? Trying to add the inbound and external_addr is it bin/darkirc/darkirc_config.toml? https://darkrenaissance.github.io/darkfi/misc/ircd/ircd.html#usage-darkfi-network Title: ircd - The DarkFi Book ah right, thanks deki: that's ircd tho darkirc is a different app you need to run 'make darkirc' on master I was doing the first step for setting up the tor enabled darkirc node, had to install Tor and launch the hidden service ++ just ran "make BINS=darkirc" and it's building now, is that ok? yes ok sweet, btw I dont need to run another ./ircd? That's what ./darkirc will be instead? yes just keep this node you are talking w now running and run darkirc additionally okay understood are you using weechat? as the client yes you will need to run a second weechat weechat --dir [path_to_second_weechat] ah okay, I'm up to configuring network settings now btw the path is wherever your weechat data is stored biab : @skoupidi pushed 1 commit to master: 2ce7f38880: validator: producer tx on last instead of first position in block txs vector and proper header Merkle tree validation draoi: any idea what this error could mean after running ./darkirc: Error: Io(AddrInUse) pgrep darkirc that didn't return anything, should it show processes? well your error means address in use, so there's a port being occupied by something else it should show the port in the output though the part in configure network settings was asking for youraddress.onion:your-port, I used the port number 25551 from the tor setup, should I just use some other random one? i'll let someone else answer that q idk I will play around with it meanwhile : @zero pushed 1 commit to master: 2094274851: add a Blind type to the SDK, which is used in all bullas as the explicit blinding factor.
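a sketch of the strong-typing idea behind that Blind commit (illustrative; the actual SDK type in 2094274851 may differ, and it assumes the pasta_curves and rand crates):

    use pasta_curves::{group::ff::Field, pallas};

    /// Strongly-typed blinding factor: can't be swapped with a value field
    /// by accident, since the compiler rejects a bare pallas::Base here.
    #[derive(Copy, Clone, Debug, PartialEq)]
    pub struct Blind(pub pallas::Base);

    impl Blind {
        pub fn random(rng: &mut impl rand::RngCore) -> Self {
            Self(pallas::Base::random(rng))
        }
    }

    /// A bulla's fields become self-describing instead of N bare field
    /// elements whose order is easy to get wrong.
    pub struct CoinAttributes {
        pub value: u64,
        pub coin_blind: Blind, // was `serial`, renamed as discussed above
    }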
this should be the last of the big commits deki: i'm not entirely sure what you mean but AddrInUse could refer to e.g. irc listen addr maybe you are using the same addrs on both ircd and darkirc the rpc and irc listen ports must be unique to that node ah I see you're right, it's due to using the same addrs, I exited out of ./ircd but now I get [EVENTGRAPH] Syncing DAG from 0 peers, then Sync: Could not find any DAG tips, failed syncing DAG, then it exits after a few attempts : @skoupidi pushed 1 commit to master: 72526ed18c: validator: improved uproposed txs retriaval logic deki: what seed node are you using you need to set it to the seed node i pasted above seeds=["tcp+tls://dasman.xyz:5262"] it's not a tor node, so you will need to enable additional transports allowed_transports = ["tor", "tor+tls", "tcp+tls"] . deki: thanks for responding! I'll take a look at the CS50. Less than a month ago I completed the fundamental computer science course at Roppers Academy (a free platform whose motto is "they teach you how to learn" and "0 videos, pure text", where they forced you to fend for yourself and search the docs and man-pages) and the last thing I focused on was Python. I already knew the basics, so I went straight to the "automate the boring stuff" chapters to manipulate the system. And I am currently practicing making a video game with "python crash" to learn how to design classes and structure code. haumea: thanks for responding! When I can I will enter the libera irc channel, I think it will be very helpful to me. code: sounds like a good approach, I recommend getting into Rust once you feel confident with Python *coda draoi: yes you're right, I forgot to use the correct seed node. Will try it again now draoi: still getting the same error: Sync could not find any DAG tips I followed these steps, including installing Tor: https://darkrenaissance.github.io/darkfi/misc/tor-darkirc.html Title: tor-darkirc - The DarkFi Book only difference I noticed is it doesn't create the config file here: ~/.config/darkirc/darkirc_config.toml rather it's at /.config/darkfi/darkirc_config.toml so there's no darkirc folder, is that okay? network settings are exactly the same as Step 3, except for the seed and the allow_transports you sent me bbl gm gm gm : gm deki: did you enable all transports? there might not be enough tor nodes on the network also, the seed node does not support tor try connecting to the seed with a normal (non tor) config i.e. use default config but change the seed to the one specified see if that works i'm also testing rn and now getting this: 08:19:25 [INFO] [P2P] Channel inbound connection tcp+tls://dasman.xyz:5262 disconnected 08:19:25 [ERROR] send_version() failed: Channel stopped dasman: is your node running fine? that's the seed node it's expected behavior to connect and then disconnect to the seed node is this on your seed or what node? no, just on darkirc now, lilith not running actually, i think that's probably dasman's node doing its refinery https://darkrenaissance.github.io/darkfi/arch/p2p-network.html#hostlist-filtering Title: P2P Network - The DarkFi Book nodes periodically connect and then disconnect to test their connections I see, hmm for some reason now it doesn't want to sync dag but your connection is fine, right? i have 3 nodes that are running fine, but the dag should always sync what do you have in your hostlist?
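putting the pieces above together, the relevant net section of darkirc_config.toml would look roughly like this (a sketch based on the values pasted above, not a complete config):

    [net]
    ## Seed node to sync the DAG from (clearnet only for now)
    seeds = ["tcp+tls://dasman.xyz:5262"]
    ## Include clearnet transports, since the seed is not reachable over tor
    allowed_transports = ["tor", "tor+tls", "tcp+tls"]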
anchorlist tcp+tls://dasman.xyz:5262 1707378122 oh no, wait that's for lilith .local/darkfi/darkirc/hostlist.tsv is actually empty that means the seed sync process failed somehow it's configured with dasman node right, not your own seed? can you paste the full debug output into pastenym or something (from running the node to dag sync failed) draoi: yup, with dasman's seed, one sec : @draoi pushed 3 commits to master: a20ab3b5a8: doc: add dnet code comments : @draoi pushed 3 commits to master: f189b71fd7: lilith: add checks to whitelist refinery... https://pastenym.ch/#/8PQqAlgV&key=5e598ded05180e7b2b1002512baeb720 Title: Pastenym ty tried removing db but didn't help dasman: you might want to rebuild lilith given the recent change errorist: this is the error Failure during sync seed session #0 [tcp+tls://dasman.xyz:5262]: Channel stopped maybe the commit i just made will help ok cool, I remember it working fine yesterday, maybe I crashed dasman's node from all the testing I did yesterday :D maybe a restart will fix it if dasman could provide some debug output for his lilith that would also be helpful errorist: the node should never crash ah maybe he has some inbound connection limit set but i think he corrected that yday i am getting this also: 08:45:47 [WARN] net::session::seedsync_session: [P2P] Failure during sync seed session #0 [tcp+tls://dasman.xyz:5262]: Channel stopped ok on a separate node: [WARN] net::session:seedsync_session: [P2P] Failure contacting seed ://dasman.xyz:5262]: Connection failed (debug output looks messed up tho the addr is correct) dasman: when you are online maybe you can share your lilith logs and also rebuild latest master hihi o/ coda: i'll help you, just feel free to ask anytime upgrayedd: Money::GenesisMint should also have an exception in validator to not check fees upgrayedd: And should be allowed only in genesis block git log main..pp ^ this should show the commits diff between both branches, right? You use git diff for a diff brawndo: we verify genesis block using false in verify_fees bool https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/validator/verification.rs#L94 Title: darkfi/src/validator/verification.rs at master - darkrenaissance/darkfi - Codeberg.org and producer tx must be the Transaction::default() (aka empty) one so no further fee checks i just want the commit list we check that verifying slot must only be genesis(0) in the exec section https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/contract/money/src/entrypoint/genesis_mint_v1.rs#L83-L91 Title: darkfi/src/contract/money/src/entrypoint/genesis_mint_v1.rs at master - darkrenaissance/darkfi - Codeberg.org upgrayedd: ahh ok upgrayedd: I grepped for GenesisMint in validator and didn't see anything so ok cool it's just one way haumea: git log a..b would just show you commits in one of the branches and not in the other yep ty halo2_proofs = {git="https://github.com/parazyd/halo2", branch="v4"} so we're just locking to a specific branch on halo2, is this preferable to just referencing halo2 directly?
Yeah there's quite a few patches in that repo Not upstream oh i was looking at main, not v4, thanks brawndo: validator only checks if transactions are not PoWReward and GenesisMint will always fail if not on genesis so we are covered there upgrayedd: Well ok, as long as it's protected in the exec section Yeah iirc I showcase that in the tests ok i see there's one patch: 397c77cf97e30587d4cdc9676679c6def268daca lmc https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/contract/money/tests/genesis_mint.rs#L81-L91 Title: darkfi/src/contract/money/tests/genesis_mint.rs at master - darkrenaissance/darkfi - Codeberg.org haumea: Yeah just a few things squashed into one haumea: It's VK PK serialisation, and dynamic circuit configuration using given params nice, good to know. I'll add a comment to Cargo.toml did ying tong review this? There are more commits upstream but none are related to code: https://github.com/parazyd/halo2/compare/v4...zcash%3Ahalo2%3Amain Title: Comparing parazyd:v4...zcash:main · parazyd/halo2 · GitHub It's just doc stuff and misc haumea: It's reviewed by PSE aha great i'll make note of this. where is it reviewed by PSE? it's a pull req they submitted? https://github.com/privacy-scaling-explorations/halo2 Title: GitHub - privacy-scaling-explorations/halo2 It's patches from here cool We could maybe use that repo directly, I haven't tried They're mostly focused on KZG though yeah it's fine, just want to have a record of this kk https://github.com/ssjeon-p/ecip Title: GitHub - ssjeon-p/ecip btw this is liam's algo in halo2 it's faster EC mult and doesn't need all that weird stuff with the ECC chip Ah cool maybe we can extract that into a gadget Just needs a proper license :P yeah we could add it later, i have a sage impl so we could rewrite it. also another impl: https://github.com/levs57/halo2-liam-eagen-msm Title: GitHub - levs57/halo2-liam-eagen-msm Nice brb b hi, i'm trying to compile on mac :/ and running into an issue with unknown feature "stdsimd" due to dependency on ahash 0.7.7, which is fixed in ahash 0.8.7 per https://github.com/tkaitchuck/aHash/issues/200#issuecomment-1928956777 Title: error[E0635]: unknown feature `stdsimd` · Issue #200 · tkaitchuck/aHash · GitHub is it possible to switch to ahash 0.8.7? errorist: you had this issue recently, right? aiya: the issue is due to nightly breaking crates that haven't updated their build.rs script haumea: yes, upgrayedd helped me fix it you can try to force the ahash version directly, but I'm not sure it won't try to bring the older version, since its minor version is different but let me check just to be sure the project relies on both 0.7.7 and 0.8.7 for some reason lib.rs:33:42 is where the make is erroring out ahash is an indirect dependency so sub/dependency crates pull both versions you can check on the deps tree who pulls the old one correct, used by hashbrown 0.12.3 and 0.14.3 https://github.com/rust-lang/hashbrown/pull/496 Title: Bump ahash to fix missing `stdsimd` with nightly by chriscerie · Pull Request #496 · rust-lang/hashbrown · GitHub fun fact, thats an official rust-lang crate exactly, 0.8.7 has the fix.
I'm trying to work on a mac with IntelliJ RustRover IDE and the mac build and running tests would really help we have a workaround you can apply please don't advertise IDEs, we don't care here :D execute these, in order: rustup toolchain install nightly-2024-02-01 rustup target add wasm32-unknown-unknown --toolchain nightly-2024-02-01 downloading after that you have to edit the repo config sed -i rust-toolchain.toml -e "s|nightly|nightly-2024-02-01|g" sed -i Makefile -e "s|+nightly||g" then you should be good to go, run make clippy or make test to verify haumea: should I add these instructions in the doc? we don't know when crates will be fixed again also thats the reason tests fail btw in pipelines draoi: everything looks very normal all clean, I can connect to seed and get peers no problem I'll rebuild now running make test now, had to brew install gsed as the sed command was failing on mac brew install gnu-sed well you could edit the file manually directly :D :3 clippy finished building, make test fails with library 'sqlcipher' not found, I have installed sqlcipher 4.5.6 dasman: can you share the logs? i want to see what happens on the lilith side when peers are giving seedsync errors I haven't saved any :( will do now, debug level ty aiya: you need the dev version errorist, deki: you can retry connecting your darkirc nodes to dasman seed upgrayedd: thanks, and yes the doc instructions need an update upgrayedd: always good to document things so we can link to people, idk if it counts as "official advice", more like a workaround section if ppl have this error but you're the boss here yeah will add a note section later I'm fighting clang now XD you need the sacred artifact, the amulet of debugging you mean printf()? haha recently i met my friend eric who's a very experienced dev, and he was shocked devs today don't use debuggers and profilers, just print I don't think that statement is true, especially in the corpo/soy space most of them use the integrated IDE breakpoints to debug I would argue the opposite is true, people don't (when they should) use just print only debugger I would ever use: https://github.com/aggstam/debugger Title: GitHub - aggstam/debugger: Simplifying the proper way to debug a program. without breakpoints, how do you read the current state of the objects and data in memory? aiya: print it *current* gdb gud : test test back draoi: just came back, will try it again dasman: can connect now to your node again : sup : ACTION waves : :) hah! eric says he regularly runs his programs through a profiler which gives you a good overview you don't have overwise *otherwise so he knows where the program spends most of the time and key places to focus for optimization haumea: https://github.com/darkrenaissance/darkfi/blob/master/src/contract/test-harness/src/dao_vote.rs#L79 haumea: Should this be the snapshotted tree instead? yes correct ACK fixing nice i tried to remove clear_inputs, but it means i have to delete faucet stuff from code I'm soon done with all of it No worries I'll remove Already did, just uncommitted oh ok, we're halfway through the same thing ok i'll stop this then Just finishing adding fees to dao::vote and dao::exec in the test-harness And then I need to make sure all the tests pass and I'll push https://agorism.dev/uploads/foo.diff.txt Been at this for many days lol But it's quite an improvement shall i stop this one?
aha nice Yeah you can I have more or less the same thing in my repo i noticed for FeeParams, it uses the input/output from xfer Correct but we don't need all those values so it could be optimized to be smaller if that's what you want like value and token commit for example Maybe, I just took the simpler approach yeah thats what i thought To be done asap you like code reuse which is good too ok makes sense upgrayedd: https://github.com/darkrenaissance/darkfi/blob/master/src/contract/test-harness/src/dao_exec.rs#L211-L218 Is this the correct conversion? https://termbin.com/s7j0 brawndo: yes Nice, I got it then :D aiya: all smooth? : @skoupidi pushed 1 commit to master: 0957973ff6: README: development nightly notes added haumea: https://codeberg.org/darkrenaissance/darkfi#living-on-the-cutting-edge Title: darkrenaissance/darkfi: Anonymous. Uncensored. Sovereign. - Codeberg.org dasman: could you take a look at your lilith logs? [WARN] net::session::seedsync_session: [P2P] Failure during sync seed session #0 [tcp+tls://dasman.xyz:5262]: Channel stopped if you could just send the entire log in paste nym or something that would be grat s/grat/great just want to see if everything's ok or not not sure why ppl couldn't connect earlier still getting the 'Failed syncing DAG' error :/ my allowed transports is: ["tor", "tor+tls", "tcp+tls"] and seeds = ["tcp+tls://dasman.xyz:5262"] also my hostlist.tsv is empty, not sure if that matters deki: can you try with the default config? just change the seed node okay, so seeds will just be what's displayed here? https://darkrenaissance.github.io/darkfi/misc/tor-darkirc.html Title: tor-darkirc - The DarkFi Book i am saying don't do the tor setup ah right backup your current config (save as tor-darkirc-config.toml) then rerun darkirc it will say 'created new config at [...]' then open the new config and change the seed to the dasman seed then try to conenct s/conenct/connect okay : got it working : @parazyd pushed 5 commits to master: 00aefdded5: contract/money: Faucet cleanup... : @parazyd pushed 5 commits to master: c7f287d4bd: contract/money: Add FEE_CALL_GAS constant, and export some structs : @parazyd pushed 5 commits to master: 2f7c4b4e17: contract/money: Implement std::hash::Hash for OwnCoin : @parazyd pushed 5 commits to master: 8828438d8f: contract/test-harness: Cleanup and addition of tx fees. : @parazyd pushed 5 commits to master: a516ec90e0: Fixup rebase artifacts Tests need some rewriting : nice, welcome deki There is now automatic logic of collecting owncoins upon execution of txs in the harness So logic needs to be rewritten do we need fee logic for all of the tests? It's there It's optional ok : so what's cool about darkirc as opposed to ircd is that it has msg retention up to 24hrs : so you can be offline and still receive msgs (within that window) haumea: The point is you can enforce the fees wherever you want now All the functions support it what needs to be rewritten? The tests for each contract Mostly removing stuff we have a few days until review, i finished my cleanup tasks And modifying some asserts related to owncoins i was planning to look over spec and work a bit more there, but can also help with code If you can, maybe do the DAO contract and I can do deployooor and money brawndo: maybe also nuke bin/faucetd? 
upgrayedd: It's gone there's also the swap stuff which doesn't let DAOs do swaps rn haumea: You can just run `make clippy` inside the dao dir ok brawndo: (x) doubt oh I misread your sentence Yeah the daemon is still there :D nw Poof : @parazyd pushed 1 commit to master: 2d06c44cc0: bin/faucetd: Remove code.... draoi: darkirc i deleted config, and changed seeds = ["tcp+tls://dasman.xyz:5262"] anything else i must do? Error: DagSyncFailed stops with that that shouldn't happen : draoi: it's not a window btw, tree resets 00:00utc can you share the logs does it say seedsync failed dasman can you check the lilith logs? codeberg down? git push just hangin' try again sometimes it hangs : ++ draoi: one sec : dasman: a lot of ppl reporting they cannot connect to your seed https://www.youtube.com/watch?v=ffET_NmfZjc&pp=ygUTanVzdCBoYW5naW5nIGFyb3VuZA%3D%3D Title: Beetlejuice Just Hanging Around - Original Video - YouTube : can you share the logs? git push be like <3 oh haha just msgd u on the otherside dasman : yy sure just one sec XD ty haumea: So the work is mostly just manual work like this https://codeberg.org/darkrenaissance/darkfi/commit/7e271717192c783493d6a98d6b75795ee9c8a115 draoi: https://agorism.dev/uploads/darkirc.log.txt ok ty brawndo Title: contract/deployooor: Port tests to updated test-harness · 7e27171719 - darkrenaissance/darkfi - Codeberg.org haumea: did you rebuild darkirc? less -r /tmp/darkirc.log.txt yes will try again draoi: ^ try that command the logs are showing something i would not expect ok i just did make distclean trying with 7e271717192c783493d6a98d6b75795ee9c8a115 (master) haumea, I changed my config file back to the default values, this is what I have now (only showing what I changed, left everything as is) https://pastebin.com/h8RLceKV Title: ## Outbound connection slotsoutbound_connections = 8#outbound_connect_timeou - Pastebin.com deki: did you change anything apart from the seeds = value? 
draoi: https://agorism.dev/uploads/darkirc.log2.txt haumea: I changed allow_transports = ["tor", "tor+tls", "tcp+tls"], seeds= dasman, inbound = ["tcp://127.0.0.1:25551"] and inbound_connections = 8 oh i'm using XD seeds = ["tcp+tls://dasman.xyz:5262"] you shouldn't need to change anything in the default config for it to work yes that's right sorry I was lazy and didn't finish it aside from the seed ok yeah that's all i did deki btw 127.0.0.1 will only allow connections from the same pc, for enabling inbound from the internet, you need 0.0.0.0 yeah so the things I just mentioned are probably redundant, except the seed ah thanks, didn't realise haumea: the logs are showing that you are connecting to the seed, but the seed is not sharing peers with you i would need to see the lilith debug output and hostlist to know what's happening upgrayedd: ld: library 'sqlcipher' not found even after brew install sqlcipher, there is no different sqlcipher-dev installer for mac dasman: you can inspect the lilith hostlist using dnet (without needing to stop the node) i think i should probs deploy a lilith for more efficient debugging aiya: check how you can compile sqlcipher on your own by using the static build guide https://darkrenaissance.github.io/darkfi/dev/dev.html?highlight=static#static-binary-builds or the corresponding docker file https://codeberg.org/darkrenaissance/darkfi/src/branch/master/contrib/docker/static.Dockerfile#L25-L33 Title: Development - The DarkFi Book Title: darkfi/contrib/docker/static.Dockerfile at master - darkrenaissance/darkfi - Codeberg.org as references we don't support macos haumea: Actually don't worry about the tests, I'll manage on my own can't sync dag now too now it worked so is the idea to run the darkirc node for some period of time to see how stable it is? tcp+tls://ohsnap.oops.wtf:31337 my seed node should be working now if anyone wants to give it a try errorist: connecting to your seed node, that means those of us connected will only be able to converse with each other? Want to make sure i understand this no, we should all be able to communicate it's good to have multiple seeds in your config for redundancy I guess but i'm no expert ^^ ah ok, so it's more of like a host? And I can leave multiple seeds uncommented then? I don't need to leave only one yup, can have multiple : @parazyd pushed 4 commits to master: 7e27171719: contract/deployooor: Port tests to updated test-harness : @parazyd pushed 4 commits to master: 23abd8c526: contract/money: Port genesis-mint test : @parazyd pushed 4 commits to master: e3b785a986: contract/money: Port integration test : @parazyd pushed 4 commits to master: 427dbce106: contract/money: Port mint-pay-swap test : @parazyd pushed 1 commit to master: 2125cf7c98: contract/test-harness: Append missing proof to vks : @skoupidi pushed 2 commits to master: 1d82f5c260: README: development nightly notes improved : @skoupidi pushed 2 commits to master: b30c20379c: darkfid: integrated latest changes ok brawndo ty woah errorist, your seed worked! 
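for reference, the relevant config lines being discussed would look roughly like this (a sketch assuming the two seed addresses mentioned above; adjust to your own setup):

    # multiple seeds give redundancy if one is unreachable
    seeds = ["tcp+tls://dasman.xyz:5262", "tcp+tls://ohsnap.oops.wtf:31337"]
    # the dasman seed is not a tor node, so tcp+tls must be allowed too
    allowed_transports = ["tor", "tor+tls", "tcp+tls"]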
yayy nice on one testing now deki: this is a p2p tutorial that explains all the basic concepts around the p2p network https://darkrenaissance.github.io/darkfi/learn/dchat/dchat.html Title: P2P API Tutorial - The DarkFi Book also check https://darkrenaissance.github.io/darkfi/arch/p2p-network.html Title: P2P Network - The DarkFi Book cute hostname errorist : i dont see the messages : hihi : hiiiiiiiiiiii : hihi : hihi woah they all sent those are the prev ones draoi: thanks :D : hey haumea : welcome to the other side : does inbound_connections = 0 mean unlimited or 0? : it means 0, the default is 20 now to : s/to/tho : ok : it's a problem if it's zero cos it means you won't send your addr to seed node or other nodes, since ping_self is disabled : ok i put 100 : kewl : do you have an external addr tho? : when ppl setup tor and provide instructions, i'll set that up too : yeah : tor is only partially deployed now afaik. errorist is working on a tor seed : btw we have 24 hours for resetting tree : maybe we should do like 8 hours, but go back 1-2 weeks, starting with most recent tree : later we can see how to enable support in clients like weechat or custom UI : but good to have these presets working before people start running lots of nodes : draoi: my tor node should be running, but haven't tested it: tor://6pllu3rduxklujabdwln32ilvxthvddx75qg5ifq2xctqkz33afhczyd.onion:25551 : I have outbound_connections = 8, will set it to 20 : we were talking about inbound connections above fyi : ah yeah good idea : *inbound : ah kk : yeah the more the merrier for seeds especially : should i increase outbound too or not? : outbound default is 0 afaik, so if you want an outbound node you will also need to set that to something reasonable ran make coverage and eventgraph_propagation failed tests.rs:164:9 : encrypt: maybe you can try inspecting your lilith's hostlist by running dnet : should i put outbound? : (for a server node) : yes : i have = 8 : that's probs fine, idk really : ok i put 10 : kewl errorist: congrats on setting up the seed node! I'm helping onboard a devops contributor who plans to run a seed node as well, any pointers to getting started? aiya: thanks, it took me a while but think I now understand how it works thanks to draoi and dasman :) I can try writing a noob friendly guy on how to run seed nodes aiya: tests are broken now, we are actively fixing them also why run coverage? it will take hella lot as its slow af ty : for some reason i'm not always getting new messages, have to quit weechat and reopen and then I see them, strange im trying everything on the mac and setting up a build environment *guide lol : that shouldn't happen aiya: glhf :D feel free to PR on stuff thats missing in the deps config, etc : let me restart darkirc : @skoupidi pushed 1 commit to master: d81be20ec1: src/error.rs: minor cleanup i'm getting this: 15:23:58 [ERROR] send_version() failed: Channel stopped but still i synced successfully just now : hihi : welcome!
: uh well the messages shown on ircd are different : three are missing : Checking mirror bot : test back : test back test test back : test : test back test back : @skoupidi pushed 2 commits to master: eaecebf47c: darkfid: cleaned up all slot references : @skoupidi pushed 2 commits to master: 01f88db53b: drk: cleaned up all slot references one of the bots was on old darkirc v0.4.1, that caused the send_version() error, all should be clean now aha kk : @skoupidi pushed 1 commit to master: 47b0e2905f: doc/arch: removed pos related stuff and added a high level description of PoW logic : @skoupidi pushed 1 commit to master: 3e7bc53af0: doc: removed faucetd references gm : @parazyd pushed 2 commits to master: a8297adbf4: contract/money: Port token-mint test : @parazyd pushed 2 commits to master: 9c44bfb9ed: contract/money: Delete redundant tests afk bbl cya upgrayedd: What would be the right way to add a block to the blockchain in the test-harness now that there are no slots anymore? I want this to work: https://github.com/darkrenaissance/darkfi/blob/master/src/contract/test-harness/src/money_pow_reward.rs#L55-L58 brawndo: it fails now? since you are using fixed difficulty, you can do something like this: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/bin/darkfid/src/tests/harness.rs#L168-L230 Title: darkfi/bin/darkfid/src/tests/harness.rs at master - darkrenaissance/darkfi - Codeberg.org so a holder generates the next block, and then you add it to all of them using validator.add_blocks() so instead of executing the PoWReward tx, you just generate a block for that holder and everyone adds it. with the fixed difficulty the difficulty check should always pass : @skoupidi pushed 1 commit to master: 8eef36b898: blockchain/header: changed nonce from pallas::Base to u64 : @parazyd pushed 2 commits to master: 882c7da804: contract/money: Add malicious test case for genesis-mint : @parazyd pushed 2 commits to master: 7d95c7f09e: contract/money: WIP pow-reward test port upgrayedd: make test-pow-reward upgrayedd: It fails at VRF proof verification The next commented one, at line 93 But that's correct, as I didn't add a block to the blockchain, we need a way to do that brawndo: yeah like the link I sent, it should be easy to add ok ty b : @parazyd pushed 2 commits to master: f007d1a732: contract/money: Remove pow-reward test... : @parazyd pushed 2 commits to master: 09e7475d58: contract/test-harness: Add a generate_block() function.... : @parazyd pushed 1 commit to master: c7ea1a2c08: contract/dao: Clippy lints : @parazyd pushed 1 commit to master: f4c3a059f3: contract/test-harness: Remove airdrop module. : @skoupidi pushed 1 commit to master: d376d2d43a: contract/money/tests/integration: fixed failing test due to erroneous VRF parameters Hello gm gm currently learning about elliptic curve group law nice : test test back : test back haumea: did you have some issue with the book not updating before and if so how did you solve it? i didnt fix it, bra*ndo did ah hmm, it's weird there are changes to the book which are viewable locally (and SUMMARY.md is updated) but the website blog shows the old stuff s/blog/book where?
oh yeah Start Here is missing https://github.com/darkrenaissance/darkfi/actions Title: Workflow runs · darkrenaissance/darkfi · GitHub everything failing it's my fault : @zero pushed 1 commit to master: e6bf38d0aa: book: fix broken include links oh it's this error causing it to fail error[E0635]: unknown feature `stdsimd` --> /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/ahash-0.7.7/src/lib.rs:33:42 that's why all jobs are failing : hihi https://users.rust-lang.org/t/error-e0635-unknown-feature-stdsimd/106445/2 Title: error[E0635]: unknown feature `stdsimd` - #2 by jofas - help - The Rust Programming Language Forum 6894572da6eb8b0906b0f926e5911db651e8d493e29b227e97da3d0ebc10d9e0146d5eb1f5a67e61 oops https://github.com/rust-lang/rust/commit/ea37e8091fe87ae0a7e204c034e7d55061e56790 Title: Auto merge of #117372 - Amanieu:stdarch_update, r=Mark-Simulacrum · rust-lang/rust@ea37e80 · GitHub https://github.com/tkaitchuck/aHash/issues/200 Title: error[E0635]: unknown feature `stdsimd` · Issue #200 · tkaitchuck/aHash · GitHub : @draoi pushed 1 commit to master: 67216e14fe: chore: cargo update seems this build is still failing due to dependencies using the old version of ahash https://github.com/wasmerio/wasmer/issues/4445 Title: Broken on nightly because of ahash 0.7 (aka "update hashbrown") · Issue #4445 · wasmerio/wasmer · GitHub ah nice one seems arti-client also needs to update some dependencies but nothing on their issues about this haumea: You did the funcid changes, could you also please update the FeeV1 proof with the changes? No worries about code, just the zkas proof please oh actually it seems fine I think what should i look at? nw it's just that the proof is failing to verify I must have missed something I thought initially it's related to funcid changes, but that stays the same in the proofs i.e. if there is nothing, spend_hook should be 0 biab brawndo: use zk::export_witness_json() to debug then compare the public inputs inside the json and inside the wasm (assuming the exported json passes zkrunner) this is how i debug all zk proofs 1. is the proof sound? (so it works with witness/publics when built) 2. do the public inputs in wasm match what's expected? use zkrunner for #1, use the debug log for #2 okay will do Thanks ah yes there's an extra public input in the FeeRevealed::to_vec() In zkas it's using constrain_equal_base() which does not use an Instance But the vec kept the spend hook ok ez :) yay it works now kewl !topic artifact abstraction for wallet Builders Added topic: artifact abstraction for wallet Builders (by haumea) !deltopic 0 Removed topic 0 when should i start fixing drk? upgrayedd is working on it, so you 2 can organise ok ty : @parazyd pushed 1 commit to master: 8c2a7c65a1: contract/money: Final integration test fixes... I finished porting the money tests Now DAO and we gud master I can't compile anymore tho just looking at the test-harness and it occurs to me that a lot of this API could be migrated to a user friendly API In what sense? fuuuck im getting the stdsimd issue now too ffs rust brawndo: you know like dao_mint, money_transfer .etc that return a tx and stuff You can revert 67216e14fe31f3943b6b96c3466260eb2a7b84b0 Yeah what about it? https://codeberg.org/darkrenaissance/darkfi#living-on-the-cutting-edge Title: darkrenaissance/darkfi: Anonymous. Uncensored. Sovereign.
- Codeberg.org you don't need to revert draoi, haumea: we already have a workaround on darkfi :D brawndo: so these functions are used in the test-harness currently, but i'm saying they could be also generalized a little and used in DRK or other wallets like the relation between GDK and GTK note: i mean the TestHarness functions to create the txs (not the contract functions) like money_token.rs has fn token_mint() Well perhaps but they're quite geared towards testing i think it's possible to make some changes and share the code rather than full on code duplication for `drk` Sure i'll look into it when we're ready not sure if this is related to the issues you guys are having, but yesterday I tried building darkfi in a docker container, long story short I had some dependency issue and ran cargo update to fix it i also want to make the cli tool more stateful like git cos often passing around big strings is kinda annoying i also had this crazy idea of integrating darkirc into it for message passing so for example you could send money to people's nicks or do DAO stuff / swaps, might also make integrating RLN easier JSONRPC is your friend android doesn't allow spawning daemons like that, they have their own ghey interface https://developer.android.com/develop/background-work/services Title: Services overview  |  Background work  |  Android Developers haumea: you are entering systemd "ideas" territory.... lol true this is how the linux kernel works, you have pluggable modules that expose an introspectable common interface also how blender became big and successful https://www.geoffreylitt.com/2019/07/29/browser-extensions Title: Browser extensions are underrated: the promise of hackable software or winamp : @skoupidi pushed 1 commit to master: 7155548f45: Revert "chore: cargo update"... draoi: use the guide I sent while the crates are not updated we can also update the github pipelines so tests/book can use the workaround until then unless you want metamask/browser, we have to enable users to create extensions for software... and we can easily do that with current functionality (pluggable architecture) while gaining the benefits of: android/cross platform, and can easily build gui on top while also forever supporting cli haumea: Why is the TokenID using a fixed blind now? only for a single token, but not for other tokens so the correct way would be to store a HashMap of TokenID -> token_blind aha ah my bad upgrayedd ty : @skoupidi pushed 1 commit to master: 1eaf5fae35: .github/workflows: temporary workaround for nightly build ty quick Q about the consensus.md doc upgrayedd: i see you are saying 'previous previous proposal' and 'previous previous block producer'- is that a typo or do you mean 2 blocks previous/ 2 proposals previous? yeah I mean the previous block of the previous block from current, so yeah 2 blocks before current ok gotcha let me add a missing note about VRF so its complete go ahead 1 other thing i wasn't sure about, when you are talking about the nonce you say miners "try to find a nonce that makes the blocks header hash bytes produce a number" do you mean the block header contains the nonce? or wdym by the block header hash bytes produce a num sec when you hash data, it produces a number which is 32 bytes long (the representation) but the data you're hashing deliberately contains a field called a nonce, which can be anything ^^ changing the nonce changes the number output by the hash exactly!
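to make the nonce discussion concrete, a minimal sketch of a PoW search loop (illustrative only, using blake3 and a u64 target; the real logic lives in script/research/pow and bin/darkfid/src/task/miner.rs):

    // illustrative PoW loop, not the actual darkfi miner: hash the header
    // bytes together with a nonce and keep bumping the nonce until the
    // resulting number falls below the difficulty target
    fn mine(header: &[u8], target: u64) -> u64 {
        let mut nonce: u64 = 0;
        loop {
            let mut hasher = blake3::Hasher::new();
            hasher.update(header);
            hasher.update(&nonce.to_le_bytes()); // the only field we change
            let hash = hasher.finalize();
            // interpret the first 8 bytes of the 32-byte hash as a number
            let num = u64::from_le_bytes(hash.as_bytes()[..8].try_into().unwrap());
            if num < target {
                return nonce; // block hash is below the target: valid PoW
            }
            nonce += 1;
        }
    }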
thats the PoW mining logic re: block hash is < difficulty target so it's just selecting a new nonce (could be random, could be according to some algo like nonce += 1) ok so there is a nonce in the block header such that when you hash it it produces a number that... yep ok got it yeah think of nonce as like the only modifiable thing in the block where you change it in order to find the produced block hash that its number is less than the target (thats mining btw) yeah the grammar threw me off block header hash bytes :) Well thats the point of proofreading :D ++ I obviously write biased as fuck script/research/pow/src/main.rs Study this I consider most of this common knowledge XD lmk when the new thing is pushed i have a few changes made just will stash them : @skoupidi pushed 1 commit to master: 260a0d99d8: doc/arch/consensus: vrf proof info added draoi: pushed, glhf btw the book should have been updated now in the site : @parazyd pushed 2 commits to master: 98521da0de: contract/dao: Integration test port : @parazyd pushed 2 commits to master: f04a44d255: chore: Clippy lints gotem haumea: All is ready :) nice, we're ready basically always_has_been.jpg ty upgrayedd : @skoupidi pushed 1 commit to master: 6916cff694: validator/consensus: fork rank logic minor optimization : @parazyd pushed 1 commit to master: 68c9bc8418: consensus: Use BigUint for block ranking to obtain higher resolution draoi: for previous previous wording, maybe second to last is a better term? second to last implies something else i think former previous block might work actually maybe second to last is good well brawndo suggested n-2 which is even more explicit n-2 is perf $(n - 2)$-block is best formatting wise : @draoi pushed 3 commits to master: be7ce54770: doc: proofedit doc/src/arch/consensus.md : @draoi pushed 3 commits to master: 26a7b352ed: doc: fmt doc/src/arch/consensus.md : @draoi pushed 3 commits to master: f2666e77bc: doc: small tweaks doc/src/arch/consensus.md lmk how that looks upgrayedd the main diff is between first 2 commits (fmt messes up the diff) s/ first 2 commits/ first commit draoi: pulling and checking draoi: there are some edits that were replaced check 68c9bc84181acc30b55877d7d735785381babdb8 thats the correct rank computation steps checking ah yep, fixing https://codeberg.org/darkrenaissance/darkfi/src/branch/master/doc/src/arch/consensus.md?display=source#L74-L76 Title: darkfi/doc/src/arch/consensus.md at master - darkrenaissance/darkfi - Codeberg.org this looks weird it renders here: https://darkrenaissance.github.io/darkfi/arch/consensus.html Title: Consensus - The DarkFi Book as I read it I unders its 3 things, the n-2 proposal, a `VRF` proof and `nonce` s,unders,understand ah thought u meant format no no lol the modules is the n-2 proposal's `VRF` proof big int and `nonce` so 2 things ++ s,modules,modulus these are wrong https://codeberg.org/darkrenaissance/darkfi/src/branch/master/doc/src/arch/consensus.md?display=source#L165-L167 Title: darkfi/doc/src/arch/consensus.md at master - darkrenaissance/darkfi - Codeberg.org its not the n-1 block, its the last block (n) in the fork chain since we defined n-1 as previous, and n-2 as previous' previous other than these the rest looks fine draoi: so the question is after reading it all, do you understand how the logic works? yeah it makes sense draoi: noice, so make the last edits and its g2g ++ : @draoi pushed 1 commit to master: 5dcf458864: doc: fix doc/src/arch/consensus.md... biab upgrayedd: where are you converting blake3 to pallas::Base?
it should be blake2b using pallas::Base::from_uniform_bytes() otherwise it's less secure https://darkrenaissance.github.io/darkfi/spec/crypto-schemes.html#hashing-to-fp Title: Cryptographic Schemes - The DarkFi Book darkfi/src/contract/dao/src/model.rs:123 haumea: I don't think we do it anywhere where did you catch that? if you mean in the VRF we already changed it to use big int here 68c9bc84181acc30b55877d7d735785381babdb8 i saw it in the doc yeah its already changed, that was old impl probably before the blake2b find ok cool btw can you read it now (as its in final form) for feedback? doc/src/arch/consensus.md will do, just will eat now, so tmrw if urgent i can read tonight well I would prefer the feedback asap as I want to share it around for external feedback since later we need to make the proper formal paper : gn : gm greets gm : gm is it a mistake that consensus.md#Ranking is talking about pallas::Base? also it says ECVRF but then starts using VRF instead > If more than one fork exists with same rank, the node will not finalize any block proposals. does this keep going until the tie is broken? ok nvm it explains after haumea: I think our impl is `ECVRF`, but kept it as `VRF` after for simplicity is it wrong? well it's highlighted as a label `ECfoo` but then after `foo` is used. NBD how is epoch in the header calculated? is it calculated from the height? do we need this field or is it redundant? i would either use ECVRF each time or do ECVRF (VRF) and then call it VRF from then on epoch is used to define the rewards periods, and is derived from the height we had the discussion again a couple of weeks ago, but tbh don't remember if we discussed to remove it entirely from the header : @zero pushed 1 commit to master: 485bec0471: book: correct typo if it's calculated from the height, then you don't need it check src/sdk/src/blockchain.rs ok ah yes i'd remove it, just extra data to hash and is useless kk will remove btw the doc also acts as a "mini" spec for the structs where does the block producer public key for the signatures come from? it's nice, i like it the public key that signed the block must be the same one used in the coinbase/producer transaction so during block txs verification, we just grab that and use it to verify the block signature against ok you mean for the signature in the coinbase tx yy the pubkey is reused, ok thats fine not exactly each block we produce has a different key which we derive bin/darkfid/src/task/miner.rs#L158-L163 k ty > To calculate each fork rank, we simply multiply the sum of every block proposal's rank in the fork by the fork's length. you mean just sum the ranks of the blocks, right? yeah and then multiply that sum by the forks proposals len thats just proposals, not finalized blocks so empty fork (no proposals after canonical) will have rank 0 there are n proposal blocks, you sum the rank for the n blocks, then you multiply it by n? yes exactly why not just sum the rank for the n blocks? so we try to preserve the satoshi logic for keeping the fork with more proposals ok so it's giving much stronger preference to longer forks exactly! why sum the ranks then? could you multiply the last rank by n? i guess it must always be increasing no its not increasing the ranks are > 0, and you sum them, then multiply by n ... so it always increases, never decreases check src/validator/utils.rs#L103 ok wait you mean last forks rank or last proposal in fork rank? if you mean the former yeah then the logic stands the rank of a fork.
If the fork has blocks F = (M1 ... Mn) with rank(Mi) > 0 yy I thought you meant last block rank aha ok so the fork rank acts as an accumulator that's what you mean, right? given a fork F of length n, and the fork F' of length n+1 created by appending M{n+1} to F then always total_rank(F') > total_rank(F) (see the sketch below) just wondering if that's a desired property since it comes from summing the blocks in F yeah thats whats happening here, its a desired property since it gives stronger preference to longer forks ok so the VRF ranking is just a random number to break ties exactly! aha ok nice, that makes sense and its based on the actual sequence so you can't do long range attacks or shit like that let prefix = pallas::Base::from_raw([4, 0, 0, 0]); since you can never produce a valid high vrf in advance, so your block can rank higher darkfi/bin/darkfid/src/task/miner.rs:161 we can make that a const value, and use from_u64() sure sure, that code is a wip so all improvements/optimizations are good to do oh nice, i didn't think of long range attacks but certainly it's a risk for smaller blockchains btw could you add the math definition you just described? yeah one sec I'm not that into formal stuff, and we will need it later for formal definitions so better have it there and other formal definitions you have/can think of are highly appreciated I made the doc to be easily digestible by someone without that much underlying knowledge it's very readable so you get a grasp of the consensus logic right? yep it's very clear ++ I'm also really proud of the ascii forks art :D they are my favourite : @zero pushed 1 commit to master: 0d7d306a70: book/consensus: add formalization about fork rankings always increasing it looks good, i wouldn't add any math for graph theory stuff it's better for formulas/calculations otherwise you get stuff like this: https://agorism.dev/uploads/screenshot-1707734085.png which is literally describing a graph using math lol yeah no this stuff is only needed in formal papers, not here ok will remove the epoch now and I guess we are g2g : @skoupidi pushed 1 commit to master: 7f0f954671: blockchain/header: removed redundant epoch number !list Topics: 1. DRK vesting and mining contract mechanics (by haumea) 2. configurable generators (by haumea) 3. not no entrypoint feature (by haumea) biab !deltopic 1 Removed topic 1 !deltopic 2 Removed topic 2 !deltopic 3 No topics (discussed last week) !list Topics: 1. configurable generators (by haumea) !deltopic 1 Removed topic 1 !topic tx fees client Added topic: tx fees client (by brawndo) !topic tx fees different for issuance vs transfer Added topic: tx fees different for issuance vs transfer (by aiya) !list !list Topics: 1. tx fees client (by brawndo) 2. tx fees different for issuance vs transfer (by aiya) the android stuff is very delicate with versions of the SDK, see for example darkfi/bin/darkirc/android.Dockerfile:41 and to make an APK for things like miniquad, you need a lot of special env variables and custom setup: https://github.com/not-fl3/cargo-quad-apk Title: GitHub - not-fl3/cargo-quad-apk: Glue between Rust and Android i tried before setting it on my dev env but it's a lot of work to maintain and also replicate so can i add the docker target to darkirc Makefile? the android build instrs in the book are broken If the extra stuff doesn't correspond to actual repo building I wouldn't add them You can copy it and add your extra stuff as a generic android docker builder wdym?
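going back to the fork-ranking rule formalized above, a minimal sketch of the accumulator property (illustrative only; the real computation is in src/validator/utils.rs and uses BigUint):

    // fork rank = (sum of proposal ranks) * (number of proposals);
    // with every rank > 0, appending a proposal strictly increases it:
    // (n+1)(S + r) > nS whenever S > 0 and r > 0
    fn fork_rank(proposal_ranks: &[u64]) -> u128 {
        let n = proposal_ranks.len() as u128;
        let sum: u128 = proposal_ranks.iter().map(|&r| r as u128).sum();
        n * sum
    }

    fn main() {
        let fork = [3u64, 5, 2];
        assert_eq!(fork_rank(&fork), 30); // 3 * (3 + 5 + 2)
        let longer = [3u64, 5, 2, 1]; // even a minimal-rank extra proposal...
        assert_eq!(fork_rank(&longer), 44); // ...gives 4 * 11 = 44 > 30
    }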
well if the targets are for building miniquad stuff, they are not for darkfi bins therefore irrelevant to that docker android ndk isn't in the void linux repos i don't see the SDK either... weird hence the generic android docker builder probably based on ubuntu since it has the most updated packages (comparatively to other debian based distros) i'm saying the Makefile used to have a target to build using the dockerfile or maybe debian sid, which is supposed to be rolling release oh wait but it's removed and instead you must install the environment manually now I understood another thing but the android environment is difficult to set up, and i had to build sqlite3 manually, install specific android versions for special features... it's very difficult yeah still you can create a general docker using those exact instructions setup env vars, .etc i tried running `make darkirc.android64` but there's no sdk/ndk package for my distro we have the same thing for musl https://darkrenaissance.github.io/darkfi/dev/dev.html?highlight=static#static-binary-builds Title: Development - The DarkFi Book is the equivalent to using contrib/docker/static.Dockerfile so you can create the dev env inside a docker and use that to build for android instead of your host os https://agorism.dev/uploads/diff.txt this is the code yeah so create a docker builder and invoke the make target to that you don't need to mount iirc yes you need to mount no actually you do yeah check how the builder thing works: https://github.com/aggstam/librewolf-source-installer?tab=readme-ov-file#using-docker Title: GitHub - aggstam/librewolf-source-installer: A helper script extracting and installing a packaged Liberwolf source tar archive. you have the docker file which describes your building environment, so you init it once and then you invoke make targets directly from it the example I sent is a docker builder to compile librewolf from source, without setting up anything on your local os (other than having docker obviously) has anyone managed to get this all working in an ubuntu docker container? I got ./ircd working, but couldn't get weechat to connect, I think I know the issue also I've found a bunch of broken links throughout the darkfi book, did you guys want me to list them here or create an issue on github? deki: you can fix them and open a PR This ^ okay will do, I'll do it in the next 2-3 days because there's a few, plus I need to leave soon...for reasons !topic next tasks Added topic: next tasks (by brawndo) hi hihi Hello holla !list Topics: 1. tx fees client (by brawndo) 2. tx fees different for issuance vs transfer (by aiya) 3. next tasks (by brawndo) hihi hello Shall we start? !start Meeting started Topics: 1. tx fees client (by brawndo) 2. tx fees different for issuance vs transfer (by aiya) 3.
next tasks (by brawndo) Current topic: tx fees client (by brawndo) okay so tx fees are implemented in the validator and in the tests ACTION cheers The idea is that tx fees require 1 input and 1 output (no more, no less) to prevent abuse and keep it succinct Generally the proof stuff is here: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/contract/money/src/client/fee_v1.rs Title: darkfi/src/contract/money/src/client/fee_v1.rs at master - darkrenaissance/darkfi - Codeberg.org And usage of it is here: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/contract/test-harness/src/money_fee.rs Title: darkfi/src/contract/test-harness/src/money_fee.rs at master - darkrenaissance/darkfi - Codeberg.org Sometimes there will be the case where a user doesn't have a single coin large enough to cover a tx fee Therefore a coin selection algorithm needs to be built client-side, which would enable using Money::Transfer to merge some smaller coins into one and use the new one to pay the tx fee It might get a bit complex, might not, but just wanted to note it down for everyone upgrayedd offered to help out nice, do you know about select_coins()? it's a pretty shit coin selection algo tho, just does greedy selection darkfi/src/contract/money/src/client/transfer_v1/mod.rs:42 is the transfer protocol utxo based or account based? yy but it has to be a bigger part of logic oh it's not even greedy, just FIFO So the client is able to automatise this process some users might want coin control to not spend old inputs? e.g. if you wanna transfer something to someone, but have no coin for fee coverage, then it should know how to build a transaction to optimally use all coins aiya: Everything is utxo aiya: Coin control for that matter does not really matter in darkfi what about tx fees proportional to number of inputs Anyway it will be a tad daunting task But surely can be solved why is it daunting? seems quite ez haumea: There's a lot of edge cases to cover esp since fee calls are position independent And you want to make it optimal making it optimal is another thing ;) but making it work is surely quite ez We'll see :) 16:09 what about tx fees proportional to number of inputs Fees are based on general computation This includes wasm execution, signature verification, and zk proof verification then suddenly number of inputs matter for reducing tx fees imo the best algo would be something like: 1. sort coins in terms of nominal value, 2. iter and grab all coins that cover fees. 3. if coins.len() > 1 { generate a Money::Transfer call first combining said coins into one } 4. Generate fee call using the final coin (see the sketch below) exactly that's good ++ that's the greedy algo use case of miners paying out their earnings to the pool operators aiya: That can be done with a smart contract easily Just have the coinbase coin have a hook to some distribution contract cool Anyway !next Elapsed time: 11.4 min Current topic: tx fees different for issuance vs transfer (by aiya) yeah and then the contract simply does a Transfer proportionally splitting the inputs to outputs the issuance of a contract might be large in size, is there consideration for higher fee when deploying new assets vs transfer tx fees proportional to bytes on chain It already is done that way okay, will check You can look through src/runtime/import/ files to see ty np what are the sample fees for issuance vs transfer as set rn in terms of DRK wdym issuance?
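picking up the coin-selection algorithm listed above, a minimal sketch (a hypothetical simplified coin type; the real OwnCoin carries much more state):

    // greedy selection as sketched in the list above: take coins largest-first
    // until the fee is covered; if more than one coin was needed, first merge
    // them into a single coin via Money::Transfer, then build the fee call
    #[derive(Clone, Debug)]
    struct Coin {
        value: u64, // nominal value; hypothetical simplified type
    }

    fn select_fee_coins(mut coins: Vec<Coin>, fee: u64) -> Option<Vec<Coin>> {
        coins.sort_by(|a, b| b.value.cmp(&a.value));
        let mut selected = Vec::new();
        let mut total = 0u64;
        for coin in coins {
            if total >= fee {
                break;
            }
            total += coin.value;
            selected.push(coin);
        }
        // selected.len() > 1 => merge first, then pay the fee with the result
        if total >= fee { Some(selected) } else { None }
    }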
deploy We haven't settled yet new asset deployed with a smart contract We have the gas units used, but need to settle on a divisor to denominate it in DRK need to consider reducing spam assets created to bloat the chain dust attack per se fee pricing depends on resource usage and not functionality You can run some of the contract tests in debug mode and you'll see the gas used for various functionality I believe what aiya means is like in eth where deploying bytes on chain becomes more expensive as the chain's size grows so the VM has no concept of "issuing coins" or "making transfers", it's just a bunch of functions got it, yes like for example "verify zk proof", "set a value in the DB", "perform signature check" what is the block size limit (those are the src/runtime/import/ that brawndo mentioned) and frequency of blocks mined upgrayedd: is that true? haumea: iirc yes is gas pricing changing depending on blockchain size? how does that work? I might be mistaken tho don't quote me on that XD We've yet to decide aiya: there is no block size limit in terms of bytes haumea: As a metric you have the gas units used per tx/call we have an artificial 50 txs (excluding coinbase) limit right now but we will probably change it to gas later gas pricing depends on transactions in last block and mempool, need to review haumea: Then you select a factor/divisor over the gas units to get some DRK value aiya: yeah thats an inflationary token since you also burn % of the gas we_don't_do_that_here.jpg (yet) brawndo: yeah upgrayedd said eth makes gas price go up as blockchain gets bigger lol okay, will the fees be flat in darkfi then? and depend on number of inputs and outputs? that seems like a bad design aiya: it depends on what the tx is executing different functionalities in its calls have different gas got it haumea: again don't take what I say for granted that might be me misreading something like 3-4 years ago i won't nw aiya: Forget inputs and outputs, it's just functions in wasm as haumea said aiya: So it's the general computation you're doing ^^ I wrote this zip to combat zcash dust attack, but it was not enforced at protocol level so the spam continued for a year https://zips.z.cash/zip-0317 Title: ZIP 317: Proportional Transfer Fee Mechanism nice i know madars haumea: btw I said deploying gets more expensive, as deploying a new contract, not general gas ah ok, that's a bit better but that might also have to do with general eth value zec doesnt have functions tho, so its different for drk, but still need to consider spam attack angle spammer still pays fees, as they are not flat so whats the issue? the main issue i see with cost estimates for fees is what the ideal computer looks like, and how expensive ops are and whether you are aiming for best case pricing or to prevent worst case performance yeah, if the incoming transactions can affect the network in any way can someone explain to me the spam definition in that context? i guess there's no theoretical answer, just have to look how the network responds and modify pricing accordingly because all I see is someone using the network what defines it as spam?
we do we are borderline talking censorship here so fuck that if thats what you imply Yeah spam attacks are used for censorship That's why tx fees exist in the first place upgrayedd: thats the rules for tx fees we define :) i mean we make a choice by allowing some ops to be cheaper than others that's a kind of value judgement about which actions we prefer, or which types of computers we're targetting anyway I might not see something right now, lets not divert ok idk calling it outright censorship seems like misrepresenting the argument price to use something is not censorship if the mempool becomes full and regular txs take more time to confirm, it gets problematic anyway fee pricing: lets put something, make it easy to benchmark, run tests, see what works and modify accordingly saying something is spam for no apparent reason other than its existence is censorship thats what I'm saying doesn't have to be perfect but anyway we can discuss later just need to make sure the fee framework is fair in proportion to the computing resources used we can modify it accordingly (same with coin selection .etc - make it work then improve it) ++ ++ draoi: how is the network stuff btw? seems pretty stable ok we still need to test it in adversarial conditions yes since most of our node conns are always on : test : test back test back draoi: have you checked the lilith log? !next Elapsed time: 16.4 min Current topic: next tasks (by brawndo) there are a bunch of send_version() errors we need to do network hardening soon, assuming current network tasks are done Does anyone have any tasks they need my help on? (Same for others) We will be busy with crypto audit this week likely dasman: what are you working on? dasman: what errors do you mean? i notice this: App version: 0.4.2, Recv version: 0.4.1 so there are ppl running different versions mostly dag sync but i'll soon push to a new branch and talk to upgrayedd what's up with the dag sync? i also deployed tau a couple of days ago and now i'm seeing events validation errors so working on that ok speeding it up :) can we get a tui tool to view the event graph like dnetview? so when events are dropped or missing, we can easily examine nodes see #25 in https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html Title: Contribute - The DarkFi Book sure, i can work on that draoi: i notice this: App version: 0.4.2, Recv version: 0.4.1 yes that brawndo: looks like most of the tasks on that page are done we did it in a month, pretty good outstanding tasks: - fix drk - p2p hardening - src/runtime safety review that's the main things left. Am I missing something? - finish darkfid rewrite ok darkfid/drk (miner listening for blocks from the network and a new sync mechanism) how is the blockchain work going? wdym?
is the merge mining with xmr working, and the blockchain/consensus stuff draoi: but i see some of the version errors are from my own ip, even tho both peer and myself are on 0.4.2 blockchain/consensus is pretty much done re darkfid: right now it works as a single miner pushing blocks and other nodes listen to it lilith is still on 0.4.1 on master so maybe that's why darkfid doesn't mine blocks, it sends requests to minerd (script/research/minerd) (bin/lilith/src/Cargo.toml) that is going to be the main mining node, so thats where merge mining will happen hah maybe since darkfid is agnostic of how the block was produced, it just sends a request to minerd and then gets a nonce back nice so its almost there (TM) :D oh and for sync we will do a backwards syncing logic we ask connected peers for their canonical tip is the darkfid network protocol done or will it change? pick the most common, and then ask for the headers backward till our last known i want to review it after the headers sequence diverges, we grab the blocks then apply DoS hardening oh brilliant i love backwards sync logic that's the best :) it will change due to that sync logic right now we do forward syncing which is wrong yes it is the most usable for wallets yy you can also have checkpoints, so you don't even ask for tips you expect them to have said checkpoint its like what we have in eventgraph now ok when the network protocol is 90% then lmk but its always a single parent then we can add a resource manager to p2p code yy will ping it will need testing so ok and when is the ETA for needing to fix up the drk tool? needing? well it's broken now due to various changes throughout the code I reckon before the next testnet release next-testnet.com? ++ waiting on darkfid stability so i can update the drk cli tool but also want to go through all the commands since many of them are quite adhoc rn me_too_kid.jpg and improve them well you don't care about darkfid behind the scenes you can run a node and test drk stuff directly against that will it work now if i do that? you mean i can start working on this? like will the txs pass? yes oh kewl I told you we have block production its just a single node that's doing it XD ok thats great i'll set them up I can help you to set it up or create a script or something like local-dev-node.sh nice, we have the audit so will be around for that, but then after will switch to this task yy no worries, more time to work on darkfid behind the scenes stuff btw brawndo: should we move minerd to bin? or keep using it from research since its still wip/experimental? sounds like it belongs to bin/ yeah anyway continue/end?
end, brawndo fell asleep !end Elapsed time: 19.6 min Meeting ended gg everyone tnx bros gg thanks everyone o/ ACTION waves 🔫 bye test test back sry got cut off 16:49 end, brawndo fell asleep 16:33 so there are ppl running different versions lol was joking nw Missed between these two msgs check the logs agorism.dev/log asked if we should move minerd to bin, or keep it in research since its wip/experimental ty is the merge mining with xmr working, and the blockchain/consensus stuff No merge mining yet ok thx : @dasman pushed 1 commit to eventgraph-dev: e8d349cbab: eventgraph: divide missing events between connected peers and request concurrently upgrayedd: check this ^ out please if you have time, it doesn't really speed things up, maybe a little bit with 200k :) : @dasman pushed 2 commits to master: e8ce57e81c: bin/tau: add default hostlist path : @dasman pushed 2 commits to master: ea50f9ac5e: bin/tau: remove commented code : gm gm what does merge mining with xmr mean? Are you going to use monero as a mining token? bbl gm deki: like ltc and doge merge mining, when two coins have the same mining algo, the mining can leverage finding a block in both chains and increase rewards for the miner for darkfi, it means the existing monero miners can join mining drk with some config changes and contribute to decentralize mining these mining pools will receive the rewards for finding the next block and reward their pool operators with profits from both xmr and drk, some pool operators can decide to pay out only xmr to their miners or provide an option to the user to decide which coin/rewards they prefer for the merge mining work will we be able to mine drk from the Monero GUI Wallet? pancake: can you mine normal monero using that wallet? There is a mining mode under advanced tab pancake: can you mine anything else other than monero there? no, only option is solo or P2Pool and CPU Threads aha so since they don't have anything else, I doubt they will add DRK also thats a question for their dev team, not here... was within the realm of possibility : ) gm pancake: It could work if you'd use P2Pool pancake: https://github.com/darkrenaissance/darkfi/issues/244 Title: Merge mining with Monero · Issue #244 · darkrenaissance/darkfi · GitHub SChernykh is working on a unified API for merge mining cool Does the Monero GUI allow you to merge mine using p2pool? I don't see any options indicating that ah nvm Yes it does wait haha no, dyslexia, it said manage miner im excited for the unified API Maybe worth researching if they plan to support it Or if it already is supported, just hidden :D what is the best method for contacting xmr devs? https://github.com/SChernykh/p2pool/blob/merge-mining/docs/MERGE_MINING.MD #monero-dev on libera IRC I suppose https://libera.chat/ Title: Libera Chat | A next-generation IRC network for FOSS projects collaboration!
https://git.sr.ht/~ireas/rusty-man Title: ~ireas/rusty-man - Command-line viewer for rustdoc documentation - sourcehut git : @skoupidi pushed 4 commits to master: 6de4869bec: darkfid: removed obselete protocol_block : @skoupidi pushed 4 commits to master: 38a83c8b40: darkfid: renamed consensus_p2p to miners_p2p : @skoupidi pushed 4 commits to master: 34b750dc5e: minerd: moved from script/research into bin : @skoupidi pushed 4 commits to master: f1f05b726d: darkfid: created task to listen for appended proposals and perform finalization check for non mining nodes https://gupax.io/ is a simple monero GUI for p2pool mining as well, if looking for other options brawndo Title: Gupax biab : @skoupidi pushed 1 commit to master: 847e4749eb: darkfid/task/miner: properly listen for network proposals Merge mining is coming to Monero p2pool, and GUI wallet has p2pool integrated, so probably will be able to mine DRK thru monero GUI, though it is not possible now nice ty waffles : gm gm hi afk : @skoupidi pushed 2 commits to master: 44103f0359: darkfid: use base64 encoding | drk: minor fixes : @skoupidi pushed 2 commits to master: f345f7a338: contrib/localnet/darkfid-singe-node: updated to work with latest darkfid : @skoupidi pushed 1 commit to master: af5542da7b: drk: parse coinbase transaction : @skoupidi pushed 1 commit to master: 67692cf354: drk: minor scan fix : @skoupidi pushed 1 commit to master: d4af12f264: drk: another minor scan fix what are the plans for light clients to connect to nodes and sync with the chain? is light client support possible on the upcoming testnet? aiya: can you define what a light client is? (compared to a normal darkfid node) one that does not store the blockchain data and only cares about own transactions, like a mobile wallet then drk is a light client already it doesn't store blocks, it asks them from a darkfid daemon instance and parse them to update the state okay, and how does drk connect to a remote darkfid daemon jsonrpc zcash has lightwalletd that uses grpc for performance https://github.com/zcash/lightwalletd Title: GitHub - zcash/lightwalletd: Lightwalletd is a backend service that provides a bandwidth-efficient interface to the Zcash blockchain thanks, is there a doc spec for the jsonrpc interface? 
https://darkrenaissance.github.io/darkfi/clients/darkfid_jsonrpc.html Title: darkfid JSON-RPC API - The DarkFi Book neat next challenge is to get drk to work on android https://eprint.iacr.org/2024/188 Title: HomeRun: High-efficiency Oblivious Message Retrieval, Unrestricted OMR backend setup needs high resources when scaling, see benchmarks https://github.com/ZeyuThomasLiu/ObliviousMessageRetrieval Title: GitHub - ZeyuThomasLiu/ObliviousMessageRetrieval incentivizing OMR server operators is another challenge, unless there is a way to pay per use HomeRun is a lot more efficient than those old ones Doesn't use FHE or TEE hmm : @skoupidi pushed 1 commit to master: d54e44b573: contrib/localnet/darkfid*: updated to work with latest darkfid b b : @dasman pushed 1 commit to master: 95e7a53094: event_graph: fix events validation with days_rotation set to zero okay now tau is good to go seeds = ["tcp+tls://dasman.xyz:23331"] there are 6 dummy tasks if someone give it a shot and synced, just so you know :) : @parazyd pushed 1 commit to crypto-fixes: 7280b434fc: sdk/mimc_vdf: Generate round constants as BigUint instead of u64 : @parazyd pushed 1 commit to crypto-fixes: d63bce3a7a: sdk/ecvrf: Enforce that the public key is not the identity point I have ircd and tau set up on a vps. The darkfi folder is abt 10gb which takes a lot of space. Is there some advice for less resources/space to be needed. can some things be removed? anon: You mean the git repo? anon: It is possible to build static binaries on another machine and then just upload them to your vps: https://darkrenaissance.github.io/darkfi/dev/dev.html#static-binary-builds Title: Development - The DarkFi Book hey Then you won't need that 10GB of build artifacts Hi haumea my vps crashes trying to build i have to upload bins Yeah easy to go OOM If it's a 1GB RAM machine or whatever I have an Alpine LXC set up like in that doc I linked Then I just distribute the bins If you use Gentoo it's also possible to do it natively, but a bit more involved haumea: btw I'm pushing the crypto fixes to a separate branch called "crypto-fixes", so when you make any changes, please also push to that branch It will be easier to review that way when we're done ok dasman: do i use tau-python? i enabled RPC but nothing shows > Error: Connection Refused to '127.0.0.1:23330', Either because the daemon is down, is currently syncing or wrong url. : @parazyd pushed 1 commit to crypto-fixes: bb8bb1b828: chore: Add supply-chain to main .gitignore... 
ty brawndo : @zero pushed 1 commit to crypto-fixes: f19a4abdec: schnorr: add the pubkey to challenge hash of commit : @zero pushed 1 commit to crypto-fixes: 8baff2b00a: improve prev commit, by actually allowing hash_to_scalar() to take a Vec You should prefer taking &[&[u8]] over Vec<&[u8]> when possible Also please squash the commits relevant to a single fix, feel free to force push ok Thanks : @zero pushed 1 commit to crypto-fixes: 2d4c730a3a: schnorr: add the pubkey to challenge hash of commit ok thanks, did force push much better Cool haumea: when updating fn definitions don't forget to update the doc comments On it : @zero pushed 1 commit to crypto-fixes: 42fe1ea23d: sdk/schnorr: add the pubkey to challenge hash of commit oh oops XD i just auto-ignore all comments haumea: See I actually meant to do it like this: https://github.com/darkrenaissance/darkfi/blob/crypto-fixes/src/sdk/src/crypto/schnorr.rs#L69 It's a tiny optimisation, but it'll accumulate when we do it throughout the codebase Just a ref slice opposed to a vec By not using a vec, you avoid allocating on the heap ahh yes, you're correct i should've known that XD its all pointers after all always_has_been.jpg XD ah but the transcript will have to be a vec if we use it for the nonce generation too Where? let mut transcript = ..., and then for each new public value, we push to the transcript in the schnorr sig part (not the function hash_to_scalar()) i even thought about making a struct for it with methods like "write_point(), write_base(), squeeze_challenge()" but that's overkill I'm not seeing where you mean that happens SchnorrSecret::sign(), right now we just use the transcript for challenge, but david was saying we should also use it for mask too (altho that's my interpretation of what he said) Yeah he's saying something about deterministic nonces, but I'm waiting for a reply so in that case transcript would be a mutable vec = [pubkey, msg], then nonce = hash_to_scalar(transcript), then we make commit, transcript.push(commit), then challenge = hash_to_scalar(transcript) like that so nonce = hash(pubkey, msg), and challenge = hash(pubkey, msg, commit) which is the fiat shamir transform, but a random nonce is also fine tbh Let's see what he says actually that's wrong, nonce = hash(pubkey, msg, secret) You'd end up reusing the nonce when you sign the same msg twice I'm not sure if that's good or bad the signature is deterministic so it wouldn't matter it's only bad to reuse the nonce when the signature changes, because then you can calculate it from the public values, but since the nonce is derived from public values + secret key, it changes for different messages ok the auth enc (elgamal) needs a MAC using poseidon, then we can encrypt that ++ is poseidon vulnerable to length extension attacks? i guess not since we input the messages *then* compute the permutation, right? Depends on the context i give you: hash(x), you give me: hash(x || y) for some y x is random Yeah that can theoretically happen ok good to know But that's true for any hash function The message must be fixed length i mean for poseidon_hash, where x is a single pallas::Base guess i should've written poseidon_hash(x) and poseidon_hash(x, y) Yeah those two hashes can be the same in theory i don't mean the same What then? 
i give you poseidon_hash(x) where x is a random pallas::Base, and you can modify the hash (without knowledge of x), to create poseidon_hash(x, y) this is possible with SHA and other hash functions, but idk if it is with poseidon_hash() https://en.wikipedia.org/wiki/Length_extension_attack Title: Length extension attack - Wikipedia oh I see Give me a sec to read the wiki i think maybe it isn't due to how the permutation works, but that's just a guess, not an informed opinion blake2 is immune fyi (according to google) It says you can do HMAC instead of MAC it might not be needed though because i think poseidon is immune to length extension attacks sec I had a hackmd somewhere with filecoin talking about poseidon nice, we also don't really need the MAC in ElGamalEncryptedNote, because the note is verifiable in ZK, and anything we encrypt corresponds to something in ZK like a coin but it would be nice to add for additional security to ensure encrypted values cannot be modified (even though the ZK is preventing this) https://github.com/darkrenaissance/darkfi/blob/master/script/research/poseidon/poseidon.sage#L67-L71 altho tbh, why would you encrypt values if they aren't enforced in ZK? then they could correspond to anything and it would be meaningless so it's nbd ok so With constant-input-length hashing it's padded with zeroes to a multiple of RATE Then the constant length is encoded into the capacity element Which means inputs of different lengths don't share the same permutation This sage is equivalent to the zcash impl nice ic, that's good so the permutation poseidon_hash(x) is completely different from poseidon_hash(x, y) Yeah which means you cannot just apply y on top of x to get (x, y) But that works only because of encoding the length into the capacity elem https://github.com/darkrenaissance/darkfi/blob/master/script/research/poseidon/poseidon.sage#L78-L80 L = len(messages) excellent thats cool af ty np noice i forgot about this code, gj It's very simple once you read it a few times :D ah here's the hackmd: https://hackmd.io/@7dpNYqjKQGeYC7wMlPxHtQ/BJjaxXd9U Title: Poseidon in Filecoin - HackMD Dunno if relevant I think it's for their specific Poseidon implementation nice, one day we can go through the one we use and customize this hackmd for our book (crypto section) i added sth to the spec, but just kinda hand wavy https://darkrenaissance.github.io/darkfi/spec/crypto-schemes.html#poseidonhash-function Title: Cryptographic Schemes - The DarkFi Book I trust that the Zcash implementation is good im sure it is They're probably the most pedantic devs in the space :D : gm haumea: you probably have rpc_listen set to a value different than the default, either change it back to 127.0.0.1:23330 or use -e with tau tau -e 127.0.0.1:1234 tau being an alias for tau-python $ ./main.py Error: Connection Refused to '127.0.0.1:23330', Either because the daemon is down, is currently syncing or wrong url. rpc_listen="tcp://127.0.0.1:23330" that's inside ~/.config/darkfi/taud_config.toml so what am I doing wrong? taud running? yes pgrep taud works odd ah it works, one sec you sure you use the same config? ah ok cool :D weird i tried compiling again, but i swear i did that this morning ok i commented task 2 noice commented on it too i see it, nice ;) one concern tho, tau tree only grows, maybe we should prune only stopped tasks that are older than couple months? I'm getting "collect2: fatal error: ld terminated with signal 9 [Killed]" when trying to compile taud.
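A toy sponge showing the length-in-capacity mechanism described above; `permute` here is junk mixing, not Poseidon, and the exact encoding is a guess (iirc halo2's ConstantLength domain uses `F::from_u128((L as u128) << 64)`). The point is only that the input length is absorbed into the initial state, so hashes of different lengths never share a permutation:

```
const RATE: usize = 2;

// Junk mixing, standing in for the real Poseidon permutation.
fn permute(state: &mut [u64; RATE + 1]) {
    for _ in 0..8 {
        let t = state[0].wrapping_mul(0x9E3779B97F4A7C15)
            ^ state[1].rotate_left(17)
            ^ state[2];
        state.rotate_right(1);
        state[0] = t;
    }
}

fn poseidon_hash(inputs: &[u64]) -> u64 {
    let mut state = [0u64; RATE + 1];
    // Constant-length domain: encode len(inputs) into the capacity element up
    // front, so inputs of different lengths start from different states.
    state[RATE] = (inputs.len() as u64) << 32;
    // Absorb RATE elements at a time; the final chunk is implicitly zero-padded.
    for chunk in inputs.chunks(RATE) {
        for (i, x) in chunk.iter().enumerate() {
            state[i] = state[i].wrapping_add(*x);
        }
        permute(&mut state);
    }
    state[0] // squeeze one element
}

fn main() {
    // No length extension: hash(&[42]) leaks no state to extend into hash(&[42, y]).
    assert_ne!(poseidon_hash(&[42]), poseidon_hash(&[42, 0]));
}
```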
anyone have any idea why it is happening and could direct me to what needs to be done? 1) WHAT what's your setup? You probably don't have enough RAM on the machine you're building on vps, 1 core , 20 GB disk brawndo: I think you are right it's such a small one after all I'd advise compiling elsewhere Try this perhaps: https://darkrenaissance.github.io/darkfi/dev/dev.html#static-binary-builds Title: Development - The DarkFi Book brawndo: ty will try this reka: we also have the corresponding docker file if you prefer that https://codeberg.org/darkrenaissance/darkfi/src/branch/master/contrib/docker/static.Dockerfile Title: darkfi/contrib/docker/static.Dockerfile at master - darkrenaissance/darkfi - Codeberg.org upgrayedd: will take a look at all this and see what might be the best solution. Thank you very much for all proposals and resources you can run this locally on your machine, you don't need to run it on a server anymore : gm : Greets : @zero pushed 1 commit to master: 8a17b7175e: fix darkirc android build using dockerfile aiya: try this now ^ Did you even read the makefile? There's native Android instructions (I just hate Docker really) :D + Introducing a redundant dependency in Cargo.lock (which breaks book gen flow since its not committed) yes i did but there's no android NDK or SDK package on my linux distro and it seems the Makefile wants me to install android studio which is equally (or moreso) worse, then install NDK/SDK through the gui at least with docker it's segregated and i can purge everything once done also idk how it works for you, but i need to install sqlcipher and do loads of manual config to get builds to work (see the dockerfile) i think it's ok for cross compiling, just not for native builds upgrayedd: about the dependency, that's not redundant but needed for android builds since otherwise openssl cannot build which is required by some dependency [target.aarch64-linux-android.dependencies] openssl = { version = "*", features = ["vendored"] } haumea: then either sed it in the make target and then restore it after build it's saying to use the vendored openssl rather than the default or commit the Cargo.toml the Cargo.toml is committed, but i didn't commit the Cargo.lock Its redundant in the sense that it's an arch-target-specific config therefore special handling is needed s,Cargo.toml,Cargo.lock however you prefer, but what's wrong with just using [target.aarch64-linux-android.dependencies]? its not wrong, the commit is incomplete since Cargo.lock was changed ok adding that then I'm just saying that there is a better way to handle target specific stuff so Cargo.lock doesn't need to accommodate them : @zero pushed 1 commit to master: 75ad829ff8: Cargo.lock: openssl how? i mean i'm open to whatever is the best way to have android builds hell you can even add the sed in the builder itself but since you mount the repo folder to build from you can't do that so you can sed in the make target as to not pollute the main Cargo.lock for a target arch specific definition dasman: I tried to add a test task to tau. Is it visible for you? haumea: why do you need a different make target than the one already there? _aarch64-android when you already have $(BIN).android64 they are different commands being run i couldnt get the other one working my android skills are too weak haumea: finally works!
i haz a darkirc.aarch64-android build aiya: <3 nice upgrayedd: the Cargo.lock thing is nbd, check the commit diff, and it doesn't get added to non-android builds it just says "for android builds, when openssl is used, use the vendored version". it doesn't touch any other code I don't think so darkirc package definition contains openssl i could add echo "...." >> Cargo.toml though if it's necessary, but then i have to cp Cargo.toml Cargo.toml.bak, run build, then cp Cargo.toml.bak Cargo.toml again so it should be pulled as part of the build tree hence my redundancy comment [target.aarch64-linux-android.dependencies] openssl = { version = "*", features = ["vendored"] } these are the lines added yy I saw iirc cargo pulls everything defined as a package dep, and then uses the target specific ones when building so you always have openssl in your dependencys tries, regardless of native/cross compiling s,tries,tree I know I'm nitpicking, but you know that random crates existing in dependencies tree might give shitty future headaches https://github.com/darkrenaissance/darkfi/blob/master/Cargo.lock#L2120-L2149 see the package definition? it contains openssl, while it shouldn't yep so what should we do? btw we can also remove docker too, but just we need to test on unstable conns, so android build is good haumea: if you only care about 1 time builds (no active development) you can make the docker like we have the one for riscv https://codeberg.org/darkrenaissance/darkfi/src/branch/master/contrib/docker/riscv.Dockerfile Title: darkfi/contrib/docker/riscv.Dockerfile at master - darkrenaissance/darkfi - Codeberg.org this copies the repo folder into the docker, applies all hacks, builds the binary and then transfers it back to host so we have 0 impact on repo configs/etc for that target specific build that needs hacking ok will do this later, about to head out good to know glhf ty reka: i don't see your task, are you on darkfi-dev workspace? reka: btw you can delete the old ~/.config/darkfi/taud_config.toml and use this instead https://agorism.dev/uploads/taud_config.toml it's just the default config for taud, but seeds are changed ++ ^ ACTION is afk o/ dasman: I am and changed to the seeds you shared. Will change config and see how it works, ty. dasman: sry for the poor writing lol will ping you when this is done. : yo : test back : test : test back test back reka: np :) dasman: tried to add a task again. I'm using taud daemon with tau-cli. Correct? please tell me if it worked. oh no, use tau-python ++ oops sry I am using tau-python don't see your task you're using default config, just the seed is changed right? also could you share taud log? the seed that was shared in the chat is the same as in the agorism.dev/....I didnt have to change anything one sec dasman: https://pastenym.ch/#/DrW9qvg_&key=93cfad5ab6a4017ccf743a07691fb78c Title: Pastenym yeah it's the one nothing weird happened can you do: ./taud --refresh and try again ++ wait, could you share the log from the very beginning? yy one thing at a time lol, no rush just refreshed and restarted and tried to make a new task can I see the full log? https://pastenym.ch/#/qQe2JjRo&key=74ad7ea4c23585d4d49a3cfeb1fa210f Title: Pastenym ugh it's taking forever to load <5> hello all, what's the size of the current testnet blockchain? 5: 2.5 GB : Toist : Nice having seamless connection : Test : Test back Test back : Test Test back b is running ``` ./drk scan ``` not possible without a wallet ?
sorry, asking one more question: what's the hard disk requirement for syncing the testnode blockchain data ? hi vdz4 u need to be running darkfid for `drk scan` to work however, rn we are v close to releasing a new testnet we do not recommend trying to run the old testnet rn : gm : what's the event graph time limit again? not getting the msgs i sent yday from phone relayed altho i would expect to : also i get this: : #dev Mode #dev [+] : multiple times : btw rn it seems darkirc is the only /bin on version 0.4.2, should we downgrade to 0.4.1 and then upgrade everything to 0.4.2 when it's all ready? : to avoid version mismatches gm dasman: tried again, rm config and new was generated. changed seed. if you see the task please tell me. If not I can share logs again and hopefully it's faster to load this time :) Hello draoi. Thank you .. hey gm dasman: added 2 tasks you see them? gm haumea: yes I see them reka: yes please I couldn't load the logs from yesterday, not even now : draoi: re:events time limit, it's 24hrs, at 00:00:00UTC : hmmm yeah i didn't get replay from the various test messages wuz sending yday : it's not 24hr window btw : hard reset on midnight urc : utc* : ah ok that's why : ++ : i did some dis/reconnect testing, switching internet on, off etc and seeing does it reconnect. sometimes it reconnected, othertimes seed node rejected our connection (with the same Connection Refused error as before) : yeah same : sometimes it would instantly reconnect but then another time it wouldnt reconnect until i restarted the node : i can try to reproduce that seed node issue locally, need a bit tho as working on some other stuff rn : same for me, is it related to the seed being greylisted or anchorlisted : no it should not relate to the hostlists : ah ok cool : hostlists are independent of whether connections can be made etc : ok i think i know what it is actually : actually nm, need to think about it more : so we have this quarantine logic where if an outbound node disconnects we try to reconnect to it N times, and after N times we add it to a list called rejected() : and if it's been added to rejected then we will not accept connections from this node : however, this shouldn't apply to seed nodes since they are just inbound sessions and aren't doing outbound quarantine stuff : lmk if you need seed log for debugging : i can't work on this rn as doing other stuff : dasman stopped task (BINgPG): test task 5 : noice : dasman stopped task (BxkH6u): tau <-> irc bridge : dasman stopped task (mKKwJr): test task 1 : dasman stopped task (1HworA): test task 2 : dasman stopped task (J71N1e): test task 3 : dasman stopped task (YJ9k1s): test task 4 : dasman stopped task (fesFBi): another test task : dasman added task (OVjOqH): seed reconnection dasman: https://pastenym.ch/#/cYh10isW&key=e59567d17b87ed8c4593dcf82e9fe88a Title: Pastenym reka: everything looks normal, except this: 15:25:15 [WARN] [EVENTGRAPH] Peer tcp+tls://dasman.xyz:23332 sent us a malicious event but you synced successfully after that can you show me the command you're using to add a new task dasman: I do this --> python3 main.py add nameoftask @nameofresp +whatevertag you should get an editor open after that command to write a description if everything is well you should see "added task 'title'" or you can use desc:description with your command hello, I am working towards my first changes to the repo and have fixed some warnings wrt unnecessary casts and path prefixes, matching the order of "impl" members with the trait and other changes 
like replacing deprecated methods like 'max_value' to MAX where can I make a pull request for review? aiya: codeberg preferably, but you can use github as well dasman: I do get 'added task...' and it is visible for me in the task list. Just seems like the syncing isnt working out. upgrayedd: ty my android client keeps failing to connect with the default nodes, what other nodes can I add to connect with? dasman: ^^ also, is the issue related to nodes going offline for long and getting rejected as a result? can lilith distinguish between trustworthy nodes and light clients that join occasionally? lilith should clean up the lightclients, and even if it sends them to a new peer they would add them to graylist and reject them after failing to connect, so I don't think there is a problem there how does a lightclient fail to reconnect? my client hasn't been able to connect even once wdym lightclient? my darkirc on android :/ thats not a lightclient also default seed is incompatible with current master a custom seed is used for testing when android force closes connections for apps, like battery management and stuff, my node will go offline and then reconnect when i check next what build it is? latest master darkirc? yes default seed in the config is incompatible with it that seed is for older versions you have to use the testing seed and/or add a peer directly reka: since you're able to sync and see your own task, it should be in the dag, but neither haumea nor I see yours, so maybe you're on a different workspace (possibly you removed the wrong config file) aiya: seeds = ["tcp+tls://dasman.xyz:5262"] ^ darkirc seed ty. ill add dasman, also will work on running own permanent seed node, how often does the lilith code change? how often will it need to be deployed I'd say rarely :) upgrayedd: ^ whenever a p2p change has been pushed like the one currently tested also we need more people using tau seeds = ["tcp+tls://dasman.xyz:23331"] dasman 5262 failed for me, trying 23331 now wait 5262 is for darkirc 23331 is for tau oh, what is tau encrypted task management p2p app how did 5262 fail tho? gotcha, found it on the darkfi book says connection refused can you try now? dasman: I'll check again dasman: could you share the agorism link again please? now it failed saying can't find any DAG tips reka: https://agorism.dev/uploads/taud_config.toml aiya you have to remove existing db aiya: do you connect to some peers? like dasman.xyz:26661? : test test back : test back dasman: ty :) np conn failed again, bbl, ty dasman dasman: checked config. tried to add task again. can u see it? aiya: np, lmk if it fails again, I can connect fine reka: tau bot is running, there's no notification so your task is not passed ty dasman: will try to figure out what's wrong here dasman: do I need to run ./taud --piped? reka: that's for taubot, no need for normal usage bbl dasman: ++ b : dasman commented on task (yWx0dR): event graph tool greets, my internet was d/c : hihi gm gm : @zero pushed 1 commit to master: 7fc5889ba2: book: add section on dev : @zero pushed 1 commit to master: bcb1c5a439: book: add section on dev pls no force push to master oh sorry i tried to sneak that in won't do it again :D ;) It's a bit scary One of the ways to introduce malicious code yeah i agree, i thought maybe everyone is asleep e.g.
putting something bad in an old commit aha ok (That's why I initially had force-push to master forbidden, but we had to enable it bcos mirroring) it should cause a conflict when you git pull then you'd see the old commits changed Assuming you have the repo locally Someone cloning anew would get rekt yeah, i wonder if this is still possible with tags Yes Tag just references a single commit Ah I see what you mean You can publish the commit hash somewhere And make sure the tag is that commit i'm asking in #git git tags don't guarantee this telegram is not proprietary btw wut Yes it is https://telegram.org/faq#q-can-i-get-telegram-39s-server-side-code Title: Telegram FAQ ok we're safe, because commits contain the previous commit hash when you cherry pick, it changes the hash haumea: That's all under assumption you have a trusted repo When you don't, you'd clone anything git cat-file -p 75ad829ff85f5753feff that's the entire commit data brawndo: yeah you are right, I thought they had the server code along with the clients, noice good to know so when you publish a "safe" commit (say one that's been audited), you cannot rewrite the previous history 11:45 You can publish the commit hash somewhere correct : ACTION waves : yo : @zero pushed 1 commit to master: 9f1649ef02: book: changes to hiring based on commentz adi: https://github.com/narodnik/fagman Title: GitHub - narodnik/fagman: Facebook-Apple-Google-Microsoft-Amazon-Netflix super app ty new lib versions are available for smol, monero, arti-client, tor-rtcompat, tor-hscrypto, libsqlite3-sys, wasmparser how often are new releases reviewed and adopted? I was looking to update the ahash dependency to v0.8.8 as that fixes the stdsimd missing issue on mac, but the library relying on it (cranelift-egraph) is stuck with ond version *old made a PR https://github.com/darkrenaissance/darkfi/pull/251 Title: Fix warnings and refactor code by nighthawk24 · Pull Request #251 · darkrenaissance/darkfi · GitHub aiya reach me once you're in the design channel upgrayedd: I cleared the dm under ./local/darki and my android darkirc still fails with connection refused when trying to connect to dasman 5263 and 26661 ports not sure why 'connection refused' errors are happening, i'm trying to reproduce locally but if you stop and restart the node it should connect aiya: do you prefer PR comments here or in gh? : I'm in darkirc! : hola aiya: it's 5262 PR comments 5262 and im in ty coolz : welcome to the dark side! aiya: before proceeding, I have to say something smells really fishy about that PR so I will ask right away is this some gpt/co-pilot refactor? no, there are manual changes applied following my IDE warnings potato potato I'm intrigued to decline right away but I will help you to understand why the fuck that is wrong okay first of all, you have to explicitly define "warnings" IDE warnings are usually shit and/or outright wrong When we/you refer to warnings, we usually mean native compiler stuff, not some random 3rd party lib that you personally use did you read the PR description? second and more important, you didn't seem to follow normal dev procedures, meaning that the code doesn't build and steps are missing hmm, what steps are missing?
aiya: yeah I went through it, hence I'm here you never specify what "warnings" mean I won't shit that much, but I will just say most (if not all) of the "refactor" is pure garbage which you should have never pushed since on a first glance you should have caught that its not building is there a preference of how the existing code is set? wrt defining the crate and function names, casts, etc? is there a styleguide I can follow for the project? make fmt is for styling make clippy is to verify your code works (none of them were executed, I cloned locally to check) but anyway the problem is deeper refactor is outright wrong in most places clippy failing is odd, I will double check its not odd in most cases you removed explicit crate definitions if clause { return something } else { return something_else } shouldn't be refactored as return if clause { something } else { something_else } but it should be: if clause { return something }; something_else what if both if and else are returning something? why not use return if why the else if they both return something? the only correct "refactor" thing I see is changing match clause => return something to return match clause => something its okay if there is a preference for the project to keep separate return statements, coming from kotlin coding im used to simplifying statements. the goal is to make code clearer, and immediate return is the fastest way to do it its a common code pattern okay btw my favourite one: https://github.com/darkrenaissance/darkfi/pull/251/files#diff-ff256e9a4a745e481efe242cd50f33d0e1c99ecb0297b1584903f6bfa65d4f02L306-R527 Title: Fix warnings and refactor code by nighthawk24 · Pull Request #251 · darkrenaissance/darkfi · GitHub but why? ah, that was unnecessary, I will revert that and add back the explicit crate definitions make clippy is your friend also: https://darkrenaissance.github.io/darkfi/dev/dev.html?highlight=fmt#cargo-fmt-pre-commit-hook Title: Development - The DarkFi Book the doc is a bit outdated tho, I would just use make fmt okay aiya: thats for now, I hope you don't see me as babayaga or something, I know I'm harsh, but tough love is the best to git gud fast thanks for reviewing and sorry for the trouble, I'll follow the pre-commit hook fmt
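For reference, the three shapes being argued about above, as compilable Rust; the function names are illustrative:

```
// What the IDE saw:
fn before(clause: bool) -> u32 {
    if clause {
        return 1;
    } else {
        return 2;
    }
}

// What it suggested (discouraged here):
fn suggested(clause: bool) -> u32 {
    return if clause { 1 } else { 2 };
}

// What the project prefers: a guard clause with an early return,
// then fall through to the remaining expression.
fn preferred(clause: bool) -> u32 {
    if clause {
        return 1;
    }
    2
}
```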
will fix later today solution is to put all the nullifiers into a HashSet, and check the nullifier_set.len() == inputs.len() HashSets are prone to errors I opted for vecs which are a bit less efficient probably, but more legible and easy to write it's just to check the items are all unique transfer_v1.rs:156 oh nice, i forgot this : https://pastenym.ch/#/8n11jNab&key=8915bf2883aa90656105a5d6547c2c5f : Title: Pastenym : getting this ok i'll copy your code haumea: oh also note hashsets are not deterministic haumea: So it might cause edge cases in wasm e.g. one node uses different gas than other It's better to prefer BTree in this case : 08:54:42 [ERROR] Event 6770f21c482bfe1d5661b25319958df9c80eba785a4755a0d71b026b1c76a7d0 is invalid! : 08:54:42 [ERROR] Failed syncing DAG (Event is invalid), retrying in 10s... : not sure what that i : *is : draoi: is there some local DB I should delete and retry? : i think taud has a built in command for this : run ./taud -h : there should be something called 'refresh' or similar : ./taud --refresh : can't remember the exact command : ok thx, will try ok the vec is good too Yeah On the safer side :) So re: nullifiers wrt. DAO That means we have to snapshot the nullifier set? At what time? Or is it irrelevant and we just look at Money's nullifier set? i think it's fine as is, just need to add this missing check cloning the money nullifier set is expensive, and the only downside is: - voting opens - i send money to you - i can no longer vote on proposal - (you cannot vote either since we cloned the coins before voting opened so we're good) *cloned the coins tree *nod* oops, i have the check in DAO::vote() already, just not in DAO::propose() : @zero pushed 1 commit to crypto-fixes: f93e93e3f2: DAO::propose(): check input nullifiers are not used more than once - each one should be unique. Ah there we go you guys might find this interesting: https://github.com/dbus2/zbus?tab=readme-ov-file#readme Title: GitHub - dbus2/zbus: Rust D-Bus crate. i like the way dbus allows introspectable services, but it sadly uses XML as the interchange format iirc what's wrong with using XML? cool lib tho have you ever used XML? it's super verbose https://en.wikipedia.org/wiki/XML#Criticism Title: XML - Wikipedia nope :) just know of it as a markup language xml is hell on earth lol well the jury has spoken its all fun and games until you use it for communications between apis in production latest version: 2006 lol i worked on a project which used XML as a scripting language with tags like , .etc and the conditions as string attributes *for a project (wasn't my idea)
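A minimal sketch of the duplicate-nullifier check described above, using a Vec for a fully deterministic scan order (per the note that HashSets are non-deterministic and BTree/Vec is safer inside wasm); the function name and types are illustrative:

```
fn check_unique_nullifiers(inputs: &[[u8; 32]]) -> bool {
    let mut seen: Vec<&[u8; 32]> = Vec::with_capacity(inputs.len());
    for nf in inputs {
        // O(n^2) overall, but n is the number of tx inputs (tiny), and the
        // iteration order is identical on every node.
        if seen.contains(&nf) {
            return false; // same coin spent twice in one call
        }
        seen.push(nf);
    }
    true
}

fn main() {
    let a = [1u8; 32];
    let b = [2u8; 32];
    assert!(check_unique_nullifiers(&[a, b]));
    assert!(!check_unique_nullifiers(&[a, b, a]));
}
```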
dark-john reporting some msgs being picked up by the mirror bot but not his client yeah when im outside reconnecting is great, but sometimes it gets stuck in a state where it cannot reconnect s/reporting/reported we need a tool to dump event graph buffers + ppl send us logs haumea: i've been trying to reproduce that error locally (connection refused errors), will report any findings also reka reported they can broadcast their tasks on tau, idk what's missing can or can't? can't lol i haven't tested tau yet please do if you have time did reka share any logs? Any configs to use? there are two tasks currently draoi: yes https://pastenym.ch/#/qQe2JjRo&key=74ad7ea4c23585d4d49a3cfeb1fa210f Title: Pastenym idk if it's still valid tho the link i mean just curious if there was anything in the log that would explain why tasks weren't broadcasting brawndo: https://agorism.dev/uploads/taud_config.toml ty draoi: found nothing honestly yw event graph debugger would be useful i have a big doubt about their workspace dasman are you working on that? feel free to copy paste dnet code yes i am, already started nice thanks Any updates? ACTION ^^ We'll get audit reports for the zk vm and gadgets this week Then going to fix a few minor things in there they're doing good nice quite satisfied with their audit contracts also getting audited by other ppl Dunno when that report will be in i think it's ~10 days i need to put some effort to bring in more devs Cool going to hit up some foss communities All the good devs already doing smth https://lainchan.org/%CE%BB/res/38069.html#q38325 Title: /λ/ - Programming Employment test test back heh upgrayedd: how is darkfid doing? haumea: good, I'm in the process of testing consensus with multiple nodes nice looking forward to it exciting https://www.fossjobs.net/ Title: Free & Open Source Jobs - fossjobs.net they are all like: "We encourage people of color, indigenous people, LGBTQIA+ people, women, ..." hey just getting connected to ircd hello SIN welcome SIN i guess indigenous means native indians, so US centric hello will read along hihi all screens are clean ok well keep it up o/ glhf or gjhf !next Elapsed time: 28472598.6 min No further topics I guess that's it Q so I'm running into errors under src/consensus/state.rs but can't find the file in my project dir! error: written amount is not handled --> src/consensus/state.rs:380:13 | 380 | writer.write(&count_str.into_bytes()).unwrap(); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Ah, looks like the meeting is over. I'd like to pitch something aiya: There is no src/consensus/ hmm, I'm erroring out when running make clippy https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src I have a long-running idea about how DAOs should work. DAOs for now exist mostly as a concept, but to me, they are nothing more than Decentralized Voting Orgs Title: darkfi/src at master - darkrenaissance/darkfi - Codeberg.org aiya: maybe on v0.4.1? I would love to be allowed to build that on darkfi. The idea is to allow people to come together anonymously and work together on anything, including money-making projects - hence real DAOs dasman: ugh, you're right thx np :) welcome loopr This would require some "primitive" in darkfi DAO design Currently you can only post a proposal and vote thanks draoi loopr: anywhere i can read more about the idea? or can you be more specific aiya: when you update a branch from master, you should rebase its commits to be after head, do you know how to do that?
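For reference, the usual flow for the rebase being asked about; this is standard git, nothing darkfi-specific, and the branch name is a placeholder:

```
$ git fetch origin
$ git rebase origin/master my-branch    # replay the branch's commits on top of master's head
$ # resolve any conflicts, then: git rebase --continue
$ git push --force-with-lease origin my-branch
```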
like what about current contracts would need to change to support this upgrayedd: ok, will follow rebase policy. I merged the latest changes in my branch on GitHub Not really, but I could write something together. There is a book which a bit describes the idea in different terms: https://slicingpie.com/ Title: Slicing Pie | Slicing Pie, The World's Only Fair Startup Equity Calculator So basically the idea is to create some sort of distributed accounting for orgs You contribute, and the contribution results in compensation for the org aiya: its not a policy, its how you keep the git tree clean ah interesting loopr loopr: That's some kind of meritocracy? loopr: that sounds to me more like contract business logic handling, so its more like proposal to execute something haumea has ideas about how to build an anon credit network, idk if related I don't see how the "primitives" correlate to that It's an open problem how to "calculate" contributions as long as a proposal allows you to execute a contract function, you can do whatever There was a project called source cred which is now dormant, which wanted to do just that loopr: are you a dev? if so i'll tell you how to build it Too many ppl wanted to do that we have all the pieces, just needs to be assembled But you need some kind of policing, or AI always and indeed, brawndo, absolutely right, on the other hand, I think it is a missing piece of distributed orgs Agree haumea: I am all ears. Yes, I am a dev the challenge indeed lies in solving that holy grail nice fyi https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html#employment Title: Contribute - The DarkFi Book want to exchange keys? [contact."narodnik"] contact_pubkey = "Didn8p4snHpq99dNjLixEM3QJC3vsddpcjjyaKDuq53d" so loopr: there's a linux tool called ledger-cli, it allows you to do double entry bookkeeping I think AI could be used to some extent But unsure what to train it on there's also this book https://maa.org/press/maa-reviews/algebraic-models-for-accounting-systems Title: Algebraic Models for Accounting Systems | Mathematical Association of America we have the p2p network / event graph hang on I had to reset my laptop, I still have to generate my keys for exchange it's something i want to make, just didn't have time yet but you can have a record of anything including time or "contribution" when i make an exchange to you, we have to both approve it it could be a simple command line tool with a text file format right. so one approach could be distributed qualifying of a contribution, and then taking an average. that requires people to do it, and it becomes slow, introducing new challenges anyway eventually when the p2p has swarming, the event graph will become generic enough that it can be used in python as a widget (since event graph doesnt care what the data is) isn't that what money is for? we create banks and credit .etc not following https://agorism.dev/uploads/screenshot-1708356819.png i used to work for a catalan cooperative they had sub-coops each focused on different areas of activity each one would issue their own credit the coop would buy the credit, and so control prices this way the coop is an intermediate with the outside world (handling exchange .etc) but everybody could purchase freely within the coop ofc for that you need a good accounting system, ideally p2p (the one i worked for had a web platform for this) yeah sounds like CIC?
yeah it's the CIC correct they had a good model but poor execution, and not a good understanding of economic theory yep, a widespread issue There are a lot of good ideas around, most fail they focused too much on agriculture, should've focused on tech and industry agriculture is low yield absolutely, and very few people want to work in agriculture anyways these days My suggestion is to use some kind of tokens which accrue based on the contributions you made. Those tokens then proportionally give you ownership of the pooled equity of the org. Equity can be anything, can be stuff or money. But for simplicity, assume a project works for money, and makes 1k USD worth of profit. A has 10% of tokens, B 60% and C 30%, so A can take 100$, B 600 and C 300 Of course the challenge lies exactly on how to fairly account for those contributions There could be a project charter, which defines upfront some "constitution" (but can be changed via voting later): e.g. dev work gives you a factor of 1.5 on your time, community mgmt 1.4, documentation 1.25, providing food 2 etc. So a dev works 1 hour and makes 1.5 worth of tokens etc. This is not my idea, and I don't want to flood the chat with this. If there's interest I can write it up somewhere and share. Also, I don't want to push this suggestion only, I think it should be way easier for people to work together over the internet, empowering everyone to become independent, and possibly long term revolutionize the work world. That's my driver. So I am happy to work on other proposals if that fits a shared vision. Nice !end Elapsed time: 32.0 min Meeting ended loopr: we are working on the base layer this is all possible with the DAO design right now: voting to call functions and set policy but we have no bandwidth to work on complicated products just yet. we need to ship mainnet we even need to make a wallet also i'd start small and build incremental... what's the MVP for this? i think it's the tool i described which we can expand over time hintjens.com/blog:19 of course, I have 20 years of dev experience, I know how to work iteratively. I'd say the MVP could be record contributions and calculate one's share. everything else can come later. [contact."loopr"] contact_pubkey = "4rzHWemAB35pLjGZeKeCdGYKRa3ZG5QNRGcrJecwjgU3" > Tari and Townforge projects also considering merge mine with Monero, but not completely due to concerns with potential 51% attacks https://www.townforge.net/proof-of-settlement Title: Proof Of Settlement | Townforge https://rfc.tari.com/RFC-0132_Merge_Mining_Monero.html Title: RFC-0132: Merge Mining Monero - The Tari Network: RFC library brawndo, upgrayedd: ^ from a friend Yeah old stuff Already know all that aha great https://github.com/darkrenaissance/darkfi/issues/244 Title: Merge mining with Monero · Issue #244 · darkrenaissance/darkfi · GitHub good news: darkfi crypto passed the audit with (mostly) flying colors https://agorism.dev/uploads/darkfi-report.pdf \m/ \o/ congrats <3 <3 <3 + many thanks to upgrayedd helping resolve several build and test issues on my system loopr: ok added you now sent you a DM test haumea: So which ones do you wanna fix? :p haumea: answered, thanks i can fix all of them, will take all of 30 mins lol or be my guest if up to the challenge, happy to review I can do it tmrw, will push to the branch congrats on the audit :) : Great news on the audit! Are further audits for other parts of the codebase planned? 
: https://zfnd.org/frost-reference-implementation-v1-0-0-stable-release/ : FROST multisig is a game changer for long term self custody and DAO ownership gm, can I get people's input pls: I've been going through the Rust book & doing the 'rust by example' exercises, but also trying to learn ZK proofs at the same time, and going through the darkfi book, and now groth16, and I feel scattered. I feel like I should solely focus on Rust, but then I'm not learning actual cryptography principles so idk what to do I'm up to ch6 with the Rust book, I've been going through a lot of it at work because my manager is trying to ship me off to another project, so I technically have no 'tasks' to do other than fix compilation errors deki: I'm in the same boat, so leaning more towards getting hands on with rust than core cryptography deki: get good at rust, then later do the zk because once you know rust well, then you can get your hands dirty right away on using them (without knowing how they work) : dark-john: yeah we need to audit the rest of the code... searching for a good auditor yeah that's what I figured is the best path, just because Rust is statically typed and I've become more used to the likes of Python : also we need to make more changes first too like hardening p2p layer in that case I'll put all the crypto theory stuff to the side for now, and just practice Rust, thanks all aiya: checkout Rustlings on github, you go through exercises to fix errors, it's actually a lot of fun kinda like a game it depends what you want to do, many cryptographers can't even write code but they write math (which is another type of code ;) but if you want to be a crypto dev, well i'd do the dev part first but with a focus on using crypto your objective: get good at rust, which means: deki: checking - focus on learning the rust book. some of the later chapters are optional, but chap 1-6 and 8-10 are mandatory (then skim the rest) - then write code, lots of code until rust feels fluent. first you will do lil side projs to get comfortable, later you can try to submit patches to darkfi - lastly you can do more ZK stuff once you pass the prerequisites thanks for the input, I just want to become a crypto dev and contribute to this project/learn more about cryptography principles in the process re the book: I was going to go up to ch20, but if beyond 10 can be skimmed, I'll do that there is also a really good book called 'rust for rustaceans' which is more advanced but actually explains granular details about how rust works as a language, it's a great reference for when you are further along another user on here (antikythera) suggested: ZK proofs, groth16, bls signatures, elliptic curve group law, and finite fields, then I'd be ready ty draoi rust for rustca.. is more for after you've been working as a rust dev for a while and have a solid grasp of the basics I see : wrote an extended guide for running darkirc on android https://gist.github.com/nighthawk24/8072ac75feb8c46bcb1b8bf14165ae87 : Title: darkirc on android · GitHub aiya: termux on graphene os can only be run as the primary user (which is for my part only used as admin for security reasons and instead have multiple users). Do you have any advice how to solve this? anon: I'll check back when I have access to my graphene OS device, are you planning to run darkirc under a secondary user and sandbox activity there? aiya: I just want to run darkirc on another user than primary.
Thank you for checking :) gm aiya: There's a FROST impl here already https://github.com/darkrenaissance/darkfi/tree/master/script/research/frost It's on the simpler side haumea: So do we want a MAC in the elgamal function or just rename the function and add a doc comment? maybe both, the MAC can be another function/struct on top of the unsafe one so i guess rename for now ah I'm into it already It's difficult because const generics Right now leaning to impossible, rather than difficult the safe version calls the unsafe one, by just sending it [x1, mac(x1), x2, mac(x2), ...] then also wrapping .decrypt() and checking mac(x[i]) == x[i + 1] Yeah it's not possible in Rust At least not by keeping the current API what is not possible? Doing the MAC Because poseidon is const generic can you show me what you have? Sure https://termbin.com/3bi8 ty oh you don't need to do that each value can have its own mac That's not quite elegant encrypt(𝐱: Kⁿ) → Kⁿ⁺¹ sorry i mean: encrypt(𝐱: Kⁿ) → K²ⁿ encrypt(𝐱) calls encrypt_unsafe(𝐱') where 𝐱' = [x₁, mac(x₁), x₂, mac(x₂), ...] okay I'll add 2 new functions then it can be another struct ElGamalSafe/Unsafe likewise decrypt(𝐜') calls decrypt_unsafe(𝐜') → 𝐱' then checks xᵢ₊₁ = mac(xᵢ) for i = 1, 3, 5, ... then returns 𝐱 = [x₁, x₃, …] let s be the shared key, then macₛ(xᵢ) = poseidon_hash(xᵢ, s, i) i was thinking about this and that's my current idea for API change, lmk if you think of sth better brawndo: well tbh we don't need the safe version so it might just be extra code that isn't used which is why renaming them might be the shorter easier option Yeah likely where later we can make the wrapper if needed I'll just add the comments but it's verified and used inside ZK so not needed altho it's something to watch for when needing verified values... always check the decrypted values are valid (i.e. they exist inside a coin). the values are used in ZK so modifying the ciphertexts makes the ZK proof unable to verify so it's fine Yep If we ever need it we'll add it ez ++ But ok I found an impossible thing in Rust :D don't like writing unused code Yeah things rot just also you don't know what is needed ahead of time polkadot wrote so much bloat one thing also that's not in the report is that we're often not checking that a PublicKey is not the identity I think we should be able to fix that directly in keypair.rs So it applies everywhere secret key would have to be 0, and pubkey = (0 : 1 : 0) which in compressed form i think would be xx00...00 where xx can be 00 or 01 it would be a self own, but not an attack, possibly good to have anyway since it's weird : @parazyd pushed 6 commits to crypto-fixes: 632f7d7b58: sdk/crypto: Add "_unsafe" suffix to ElGamalEncryptedNote functions...
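A runnable toy of the MAC-wrapping scheme sketched above. u64 stands in for pallas::Base, `toy_hash` for Poseidon, and the XOR keystream for the real ElGamal primitive; only the interleaving and the per-index MAC check are the point:

```
fn toy_hash(parts: &[u64]) -> u64 {
    parts.iter().fold(0xcbf29ce484222325, |h, x| (h ^ x).wrapping_mul(0x100000001b3))
}

// mac_s(x_i) = hash(x_i, s, i), as in the scheme above.
fn mac(shared: u64, x: u64, i: u64) -> u64 {
    toy_hash(&[x, shared, i])
}

// Stand-ins for the existing "_unsafe" primitives: a keystream from the key.
fn encrypt_unsafe(shared: u64, xs: &[u64]) -> Vec<u64> {
    xs.iter().enumerate().map(|(i, x)| x ^ toy_hash(&[shared, i as u64])).collect()
}
fn decrypt_unsafe(shared: u64, cs: &[u64]) -> Vec<u64> {
    encrypt_unsafe(shared, cs) // XOR stream is its own inverse
}

// Safe wrapper: hand encrypt_unsafe the interleaved [x1, mac(x1), x2, mac(x2), ...].
fn encrypt_safe(shared: u64, xs: &[u64]) -> Vec<u64> {
    let mut interleaved = Vec::with_capacity(xs.len() * 2);
    for (i, &x) in xs.iter().enumerate() {
        interleaved.push(x);
        interleaved.push(mac(shared, x, i as u64));
    }
    encrypt_unsafe(shared, &interleaved)
}

// Decrypt, then reject the note if any MAC disagrees (ciphertext was modified).
fn decrypt_safe(shared: u64, cs: &[u64]) -> Option<Vec<u64>> {
    let xs = decrypt_unsafe(shared, cs);
    let mut out = Vec::with_capacity(xs.len() / 2);
    for (i, pair) in xs.chunks(2).enumerate() {
        if pair.len() != 2 || pair[1] != mac(shared, pair[0], i as u64) {
            return None;
        }
        out.push(pair[0]);
    }
    Some(out)
}

fn main() {
    let shared = 0xdeadbeef;
    let mut ct = encrypt_safe(shared, &[7, 11]);
    assert_eq!(decrypt_safe(shared, &ct), Some(vec![7, 11]));
    ct[0] ^= 1; // tamper with one ciphertext element
    assert_eq!(decrypt_safe(shared, &ct), None);
}
```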
: @parazyd pushed 6 commits to crypto-fixes: 89cff07469: sdk/crypto: Use deterministic nonces for Schnorr signatures : @parazyd pushed 6 commits to crypto-fixes: c191eb2b9c: validator/utils: Add FIXME note about dangerous code : @parazyd pushed 6 commits to crypto-fixes: 96c6f6fa65: sdk/crypto: Derive a deterministic nonce for ECVRF : @parazyd pushed 6 commits to crypto-fixes: 0bc4d0a2d3: contract/deployooor: Remove redundant ZK proof creation and verification : @parazyd pushed 6 commits to crypto-fixes: 763a335716: sdk/crypto: Forbid PublicKey to ever be the identity point haumea: Ok, up for review :) Starting from 7280b434fc9b5fe56bf0e46c41d4f6af1eda3a0b I think we can drop d63bce3a7a239f7c9137d6eac40cad8d1a6c5e42 since 763a3357163def8441e935f3fcc57e973ee51ff4 enforces it Whoops I introduced a mistake ah no it's fine lmk when you review, I'll then rebase and merge if all gud : @parazyd pushed 1 commit to crypto-fixes: 02c35b727e: fixup! contract/deployooor: Remove redundant ZK proof creation and verification hey was cutting chicken ok looking oh nice we can remove OsRng from 89cff0746930313a5d3c let mask = hash_to_scalar(DRK_SCHNORR_DOMAIN, &[&pubkey_bytes, message]); should be: let mask = hash_to_scalar(DRK_SCHNORR_DOMAIN, &[&secret_bytes, message]); (because mask must be private, and the pubkey/message are public info- my mistake before when i said hash(pubkey, msg)) ok, fixing : @parazyd pushed 1 commit to crypto-fixes: 045ee88bc8: fixup! sdk/crypto: Use deterministic nonces for Schnorr signatures Yeah I did it right in ECVRF but not here :D : @parazyd pushed 1 commit to crypto-fixes: 644158c13e: fixup! sdk/crypto: Forbid PublicKey to ever be the identity point for ECVRF, it's correct what you're doing but you could also hash gamma, or hash x, message, .etc (not talking about the rest of the code, just nonce generation) It's like this in the RFC ok nice H is already the message btw yeah true well it's a commitment to the message, and k is a commitment to a commitment to a message i will look more closely at deployooor later, but i trust it's fine tbh ok thanks Lemme squash these fixups ok yeah the rest is small things looks good : @parazyd pushed 5 commits to crypto-fixes: b69aeb9ffa: sdk/crypto: Use deterministic nonces for Schnorr signatures : @parazyd pushed 5 commits to crypto-fixes: ef1a39cf69: validator/utils: Add FIXME note about dangerous code : @parazyd pushed 5 commits to crypto-fixes: 29addb004d: sdk/crypto: Derive a deterministic nonce for ECVRF : @parazyd pushed 5 commits to crypto-fixes: 1ff100dae5: contract/deployooor: Remove redundant ZK proof creation and verification : @parazyd pushed 5 commits to crypto-fixes: 25d3bf7950: sdk/crypto: Forbid PublicKey to ever be the identity point ty : @parazyd pushed 11 commits to master: 60fc2f0b3d: sdk/mimc_vdf: Generate round constants as BigUint instead of u64 : @parazyd pushed 11 commits to master: 8636fb2641: sdk/ecvrf: Enforce that the public key is not the identity point : @parazyd pushed 11 commits to master: 41266d7fd6: chore: Add supply-chain to main .gitignore... : @parazyd pushed 11 commits to master: ff24d41a10: sdk/schnorr: add the pubkey to challenge hash of commit : @parazyd pushed 11 commits to master: b218255aa1: DAO::propose(): check input nullifiers are not used more than once - each one should be unique. : @parazyd pushed 11 commits to master: 23f2bbeac9: sdk/crypto: Add "_unsafe" suffix to ElGamalEncryptedNote functions... 
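To pin down the transcript ordering these commits settle on (nonce derived from the secret and the message, challenge over pubkey, message and commit), a runnable toy model; the multiplicative group mod a small prime and DefaultHasher stand in for Pallas and Poseidon, so only the ordering is meaningful, not the arithmetic:

```
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const P: u64 = 2_147_483_647; // Mersenne prime 2^31 - 1
const Q: u64 = P - 1;         // exponent arithmetic is mod P - 1
const G: u64 = 7;             // toy generator

fn modpow(mut b: u64, mut e: u64) -> u64 {
    let mut acc: u64 = 1;
    while e > 0 {
        if e & 1 == 1 { acc = (acc as u128 * b as u128 % P as u128) as u64; }
        b = (b as u128 * b as u128 % P as u128) as u64;
        e >>= 1;
    }
    acc
}

fn hash_to_scalar(parts: &[&[u8]]) -> u64 {
    let mut h = DefaultHasher::new();
    for p in parts { p.hash(&mut h); }
    h.finish() % Q
}

fn sign(secret: u64, msg: &[u8]) -> (u64, u64) {
    let pubkey = modpow(G, secret);
    // Deterministic nonce: commits to the secret (so it stays unpredictable)
    // and to the message (so it differs per message, no nonce reuse leak).
    let nonce = hash_to_scalar(&[secret.to_le_bytes().as_slice(), msg]);
    let commit = modpow(G, nonce);
    // Fiat-Shamir challenge binds all public values, including the pubkey.
    let challenge = hash_to_scalar(&[pubkey.to_le_bytes().as_slice(), msg, commit.to_le_bytes().as_slice()]);
    let response = ((nonce as u128 + challenge as u128 * secret as u128) % Q as u128) as u64;
    (commit, response)
}

fn verify(pubkey: u64, msg: &[u8], commit: u64, response: u64) -> bool {
    let challenge = hash_to_scalar(&[pubkey.to_le_bytes().as_slice(), msg, commit.to_le_bytes().as_slice()]);
    // g^response == commit * pubkey^challenge (mod P)
    modpow(G, response) == (commit as u128 * modpow(pubkey, challenge) as u128 % P as u128) as u64
}

fn main() {
    let secret = 123_456_789;
    let (commit, response) = sign(secret, b"hello");
    assert!(verify(modpow(G, secret), b"hello", commit, response));
    // Signing the same message twice yields the identical signature: deterministic.
    assert_eq!(sign(secret, b"hello"), sign(secret, b"hello"));
}
```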
: @parazyd pushed 11 commits to master: e33fb55faf: sdk/crypto: Use deterministic nonces for Schnorr signatures : @parazyd pushed 11 commits to master: 6002324810: validator/utils: Add FIXME note about dangerous code : @parazyd pushed 11 commits to master: b82aee8cd3: sdk/crypto: Derive a deterministic nonce for ECVRF : @parazyd pushed 11 commits to master: dc203e12f0: contract/deployooor: Remove redundant ZK proof creation and verification : @parazyd pushed 11 commits to master: 47e9d68ef1: sdk/crypto: Forbid PublicKey to ever be the identity point \o/ sick Now the zkvm stuff... the 2 things i'd fix if possible would be fixing the vote thing (need set exclusion gadget) and allowing DAOs to make swaps for the set exclusion, that would mean changing money::transfer() so each node builds a sparse merkle tree of nullifiers which can easily be forked I'll see about the gadget after the zkvm audit I need to study this: https://github.com/young-rocks/rocks-smt (and IIRC it had bugs) Title: GitHub - young-rocks/rocks-smt: A Sparse Merkle Tree circuit constructed with Halo2 poseidon. how are people running their ircd daemons? I assume running one on a vps could be helpful? Yeah I run mine on a server at home Kinda dodgy to keep secrets on VPSs brawndo: you mean like the plain text keys I guess In the config files, yes yeah doesn't look like ircd is very resource intensive, so prob could run on a rpi Yep definitely Though later on there will be ZK proofs needed when we introduce spam protection Although I wouldn't introduce that unless/until necessary It's a chat after all cool it's just that I usually switch off my laptop at night, a long running ircd would be better : test : test back test back : FYI ... the "can post to darkirc and it shows up in Telegram mirror, but not seeing messages from others" is still happening on my computer. This was after a shutdown and restart of darkirc and weechat this morning. So there has been no "catch up" since I last ran. Only seeing my own messages. : But I see my posts do make it to others, as they appear in the Telegram mirror. : administration added task (8pIkWJ): test. assigned to reka finally : administration stopped task (8pIkWJ): test test back : test back dasman: I tried to set up taud on my computer and on a vps, with the same steps and config. It runs on vps but not on computer. It seems like syncing isn't working. Can it have anything to do with the capacity of my computer or what might be the reason for this? reka: check your firewall that's the only thing I can think of rn damsn lol dasman: I will ty damn son! XD XD seems unfortunately that this is not the problem you're running taud without "--config" right? if yes, please check ~/.config/darkfi/taud_config.toml if the workspace secret is the correct one also if you have more than one workspace, listing task will tell you which workspace you're on rn, check if it's the correct one before adding your task dasman: it is only one workspace and I checked with the one you shared. It's the correct one..
I run without --config okay let's go through this step by step check dm ++ gm : oh that's weird gm gm greets both : gm : we need some way of replaying the event graph : so it's like we log the events coming in, then when someone has an error, they can send the data bundle (debug log, event graph replay, current event graph state), then we can accurately debug errors : isn't that what dasman is working on or am i confused : this is different : ah ok : i thought it was this + a UI to inspect : maybe they can be merged : the inspection tool is : narodnik added task (SjJ2OA): event graph replayer. assigned to dasman : narodnik reassigned task (OVjOqH): seed reconnection to @xeno : narodnik added task (iaJCUw): allow using shortened refids instead of index. assigned to dasman : what's the seed for tau? : narodnik added task (nebREV): when creating a new task, show its index. assigned to dasman : narodnik commented on task (iaJCUw): allow using shortened refids instead of index : it should be the default : it's not, but i found the correct one in #dev logs : aha kk : we should have a read only key too : idk how to enable users to submit tasks, possibly they could just submit it manually to us on here and we submit the task : but certainly having a read-only key would be nice : narodnik added task (kVNZYs): read only key so users can view tasks. assigned to dasman : narodnik commented on task (yWx0dR): event graph tool : new tau is comfy :) : yeah works well but we need more tooling and improvement around event graph : also that net fix too : there's also work on hardening the net code Untrusted deserialisation too is that referring to the buffer alloc or sth else? : also magic bytes should be customizable per app Yeah and generally I think it can be manipulated into crashing nodes For memory reasons I'd rather go for making it slower and safer over fast and flaky ok : @parazyd pushed 1 commit to master: 13c6d35e38: contrib: Add script for generating ctags nice gonna try You can add it also as a commit hook nice works well $ nvim :tag DaoInfo :tag make_mint_call although :tag DaoProposal::to_bulla doesn't work That's not how ctags work You'd match on to_bulla and get offered multiple results to_bulla  src/contract/dao/src/model.rs  /^ pub fn to_bulla(&self) -> DaoProposalBulla {$/;"  P  implementation:DaoProposal it says implementation:DaoProposal at the end so it should be possible https://parazyd.org/pub/tmp/screenshots/screenshot00375.png the tag contains the struct name as a field so it's possible to jump directly *shrug* -\_o_/- or how was it ¯\_(ツ)_/¯ :D yeah! lmao : @parazyd pushed 1 commit to master: ffcefada18: sdk/crypto: Remove unused import in ecvrf.rs : have been testing darkirc reconnect at different time intervals : 1 min offline: reconnected to seed fine : 5 min offline: reconnected to seed fine : 1 hr offline: reconnected to seed fine : i think it's just the number of times you go offline, no? : next gna try to max out the seed node inbound connections : because then it triggers the blacklist logic : do you mean quarantine?
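for anyone trying the ctags flow above, the basic commands (standard ctags/vim usage; the contrib script's exact flags may differ):

$ ctags -R src/    # or run the contrib script / commit hook
then in nvim:
:tag make_mint_call    # jump to a definition
:tselect to_bulla      # list and pick between multiple matches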
: yeah sth like that : i restarted the node before and it suddenly connected : which indicated to me it was not the problem with remotes but with local logic/state of the node : quarantine is like: try to connect to outbound, if it can't connect, try again N times, if it still can't connect, reject https://agorism.dev/uploads/screenshot-1708514086.png from https://math.ucr.edu/home/baez/rosetta.pdf kinda interesting that a program is like a logical proposition, and 2 parallel processes are just non-dependent logical statements i wonder if this has relevance for zk/anon contracts somehow (since we usually think of things in terms of tautological statements rather than execution of a program) haumea: There's the struct NullifierAttributes haumea: I think we should remove it, and pass OwnCoin around, since it always seems used in the client APIs OwnCoin has too much data inside it I added a 'fn nullifier()' to impl OwnCoin It doesn't matter, you're passing a reference i'd actually remove OwnCoin, or alias it to NullifierAttrs No that's bad one sec, looking for a commit You're doing it in TransferCallInput, which is literally OwnCoin yes but it's not always OwnCoin Yes it is OwnCoin has too much data which isn't used in other places It does not matter You're introducing redundant structs just to have a lossy version of OwnCoin one sec i can't find the code in commit log (too deep), but it was the functions from before in test-harness doing gather on OwnCoin That stuff is gone there was some places where we didn't have MoneyNote It's everywhere now I fixed it aha thanks OwnCoin and NullifierAttributes are semantically different. it's better to have them separate in fact, OwnCoin::nullifier() should use NullifierAttributes.to_nullifier() rather than calling poseidon_hash() directly that way if we ever change the format of nullifiers, it's in one place we can update easily MoneyNote is a very large struct to copy around also MoneyNote could just be: CoinAttributes, memo (and we delete the value/token blinds) You can't "change" stuff like that You need versioned structs And in fact it makes it simpler to refactor and upgrade when we just contain it all in OwnCoin(V1) MoneyNote is already copied around NullifierAttributes didn't help with that (Nor is it really relevant) : @skoupidi pushed 1 commit to master: 0de966c9c7: darkfid: forks sync logic implemented i'm just thinking because i see that it might result in deleted code and simplify things but the other side is that one is the client, the other is the model, and in general it might be good to always have XAttrs -> X for any bulla/commit in code No you only need the Type you could say the same about CoinAttributes too you know in postgres when you have a table which contains fields, i had a friend who would do foo(foo_x, foo_y, ...) for the members of a table foo i named things similarly in .zk as well so it was clear what is a coin's attribute or dao proposal attr .etc coin_x, proposal_y, ... then for each commit/bulla made a corresponding FooAttributes struct in the model.rs file as a convention so if we delete NullifierAttributes then it would be breaking that convention what do you reckon? Anywhere where you're using NullifierAttributes, you have a parent OwnCoin, which you converted into TransferCallInput for example And you again copy the note into TransferCallInput ok well go for it if it's better So it's just more code that essentially does the same thing ++ thanks for looking np I'm going through things : neat! 
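a rough sketch of the XAttrs -> X convention being argued for above, with the nullifier format fixed in one place (all types and methods here are illustrative assumptions, not the real model.rs definitions):

// illustrative only: hypothetical shapes, not the actual darkfi structs
pub struct NullifierAttributes {
    pub secret_key: SecretKey,
    pub coin: Coin,
}

impl NullifierAttributes {
    // the single place that defines the nullifier format
    pub fn to_nullifier(&self) -> Nullifier {
        Nullifier::from(poseidon_hash([self.secret_key.inner(), self.coin.inner()]))
    }
}

impl OwnCoin {
    // delegate instead of calling poseidon_hash() directly,
    // so a format change only ever touches to_nullifier()
    pub fn nullifier(&self) -> Nullifier {
        NullifierAttributes { secret_key: self.secret, coin: self.coin }.to_nullifier()
    }
}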
:D : gm : @parazyd pushed 1 commit to master: 2a49b2d79e: contract/money: Remove NullifierAttributes and use the OwnCoin API See how wherever it was used, it also uses the note : dasman added task (SVIc5e): prune very old stopped tasks. assigned to dasman haumea: It makes sense to me, the Nullifier can only be created if you have its secret, meaning you have to have the OwnCoin haumea: CoinAttributes is different, as it does not require any previously known values, but rather enforces the API and gives knowledge of what a Coin contains So I think the two do not ft the same schema/convention s,ft,fit, : made a darkirc seed to try and catch that seed reconnect error: "tcp+tls://anon-fore.st:5262" : would appreciate if ppl use this when testing hi, are there actually people from the old lorea.cc times in here? just curious yes me the other guys are all retired now with families, they "grew up" oh I see i'm never growing up haha I remember hellekin, he seemed quite committed too... yeah but he's not a dev, and plus he's anti-crypto actually brawndo is from that group too ;) i mean hellekin is a dev but more into web, these days he's doing activism type things aah ok ok btw I was known as tawhuac back then i was genjix had a lot of content on lorea.cc oh I think I remember you there too :) i used to work closely with caedes but now he works on some shitty crypto exchange all the best great devs i know are semi-retired, and the good ones work a normie jerb like bank or exchange .etc the lure of money is strong gn what was lorea.cc? : @dasman pushed 2 commits to master: 2a8ba89001: bin/tau: accept shortend refid as an identifier for tasks : @dasman pushed 2 commits to master: 0ecdff598d: bin/tau: show task index when creating a new one : dasman stopped task (iaJCUw): allow using shortened refids instead of index : dasman stopped task (nebREV): when creating a new task, show its index : draoi: connecting to your seed: 23:46:40 [WARN] [P2P] Failure contacting seed #0 [tcp+tls://anon-fore.st:5262]: IO error: connection refused : 23:46:40 [WARN] [P2P] Seed #0 connection failed: IO error: connection refused : 23:46:40 [ERROR] [P2P] Network reseed failed: Failed to reach any seeds : tysm dasman gm gm anyone read into this? https://security.apple.com/blog/imessage-pq3/ Title: Apple Security Research no idea what "post-quantum cryptographic protocol" is, haven't read the full article gm gm how does one verify apple's implementation and check for backdoors : Added anon-fore https://media.tenor.com/PO289wuDoPgAAAPo/you-don%2527t-invincible.mp4 : weirdly it seems to connect fine over ipv6 but not ipv4 : lmk if ppl encounter any correlation between internet protocol and connection issues draoi: your isp may block certain ipv4 ranges Flexing with ip6 huh :p :D yes could be upgrayedd global east poors lol also that might be why you couldn't reproduce certain connection errors with the seed You know what's weird lmao I'm googling for rust code auditors And all the results are crypto/smart contracts wtf happened to legit devs lmao brawndo: they rewrite foss https://www.google.com/search?client=firefox-b-d&q=rust+code+auditing Title: rust code auditing - Google Search smart contract, smart contract, substrate, crypto who else would pay for rust code audits? broke ass fossniggas sucks i remember working for 5 years when our project got funded for 25k.
we thought it was a lot of money good news is if we can tap into those communities then it's a wellspring of devs i was thinking to do a hipster suckless "modern linux" anti-systemd type prevent to HCPP prevent? some of these telegram linux meme communities have thousands of members The only thing I found is this https://www.radicallyopensecurity.com/ Title: Non-Profit Computer Security Consultancy draoi: i meant pre-event They're funded by nlnet oh nice ah yeah nlnet's nice, we got funding from them Andy Tannenbaum is an advisor lol lmao!! Maybe we should ping them harry recommended a german firm, they used them https://www.radicallyopensecurity.com/non-profit/ Title: Non-Profit but my friend matt says you don't want firms, you want skilled individuals into formal verif and code audits also least authority or trail of bits are standard options Yeah I'd take anything at this point well i'll ask around a bit, lets see if david responds with recommends : @zero pushed 1 commit to master: 9c802e0745: book: add page arch/wallet.md we also need a bit more time to tweak things to perfection like we did with the crypto code : @zero pushed 1 commit to master: 5101de7160: book: fix broken URL notes on wallet design: https://darkrenaissance.github.io/darkfi/arch/wallet.html Title: Wallet - The DarkFi Book ^ interesting for everybody It's great Least Authority is well respected, I'd also recommend Defuse for audits if he is available https://defuse.ca/software-security-auditing.htm Title: Software Security Auditing ty brawndo, i also forgot that metamask is non-portable to android .etc (at least trivially)... will also add a roadmap like: - get drk working, then wasm addons (cli only), then modules .etc ofc not delaying shipping mainnet, just tentative slow burner schedule (also added responsive design for mobile/desktop devices) Metamask has a standalone app (browser) on android haumea: you don't need responsive design if you only use the terminal :D thanks brawndo, will add that it needs a separate app which is a downside ty upgrayedd, will also add first class CLI support as core prereq another downside: the wallet arch is specific to browsers, so wallets need to include webkit = huge chunk of code webkit as a browser add-on app only or desktop/mobile native client too? : @zero pushed 1 commit to master: 8ce95eeacb: book/wallet: add feedback from others https://codeberg.org/darkrenaissance/darkfi/commit/8ce95eeacb869059fb45bf9314179cd4f86841e2 Title: book/wallet: add feedback from others · 8ce95eeacb - darkrenaissance/darkfi - Codeberg.org aiya: the metamask mobile app would need to include webkit to enable you to use eth addons https://github.com/audulus/rui-ios proves rust for ios app is possible with mobile UI/scaling handled Title: GitHub - audulus/rui-ios: Demo of rui embedded on iOS haumea: i see miniquad is great yeah, miniquad convinced me it's possible to render fast cross platform ui unlike flutter/react-native https://not-fl3.github.io/egui-miniquad/ Title: egui in miniquad for mobile, it's still a bunch of responsibility when working on own designs, without relying on underlying platform's features like scaling font/text, accessibility, and other fancy features expected with a messaging supported app Of course egui is insane "Fancy" features get built over time It should not be your priority for delivering a thing darkirc and wallet with egui like functionality working in a browser window would be epic Reject the browser Why would you cripple your software?
:D hah how else do you make the experience universal like a metamask wallet miniquad is cross-platform Works everywhere so native app builds and installers for each platform? Sure and browsers will deep link in the client from browsers? crypto devs primarily start their experience in js website There is no web browser how would a community organize a landing page to invite new members? > crypto devs primarily start their experience in js website That's why they're all bad agree, it sucks is the idea that everyone get the darkfi client and form communities within? Have you ever used Telegram? yes lol So there's an example You see how they have messaging, communities, and various apps you can install within the main app telegram still has incoming links for users to join the groups, users still navigate to web browser or use alternate messaging medium to obtain information on joining a group What's the issue? are those considered the fancy features? I don't follow What's wrong with sharing a URL ? Even on Android you have intents say someone wants to start a community, how will they use the darkfi client? create a new channel, and share it with prospective members to join? will there be support for search? How did you get here? :D from an alternate messaging medium Yeah so that's one way i forget this is the dev channel What's that supposed to mean? will discuss UX ideas in design aiya: you're talking about URIs so the telegram URLs are actually a (mis)use of the URI system. In bitcoin you have links like "bitcoin:..." which opens the bitcoin wallet they are easy for us to implement as a way to launch the wallet and access a resource https://en.wikipedia.org/w/index.php?title=Uniform_Resource_Identifier Title: Uniform Resource Identifier - Wikipedia it's not related to the browser but a universal desktop feature telegram just redirects the URL to a landing page with a button to "launch in telegram" which is a button with a URI URI is one thing, search and discovery is another, channels made in darkirc are not public as i understand? this is a public channel yo haumea are you here? hey yeah haumea: you recall how the VRF for block ranking is constructed right? sorry I don't know how the VRF we use works i can look into it if you want or do you mean how it's used for block ranking? wdym, you read the consensus doc, no? yeah I mean how it's used for block ranking aha ok yeah ok so there is an issue with current block rank logic previously we were using pallases instead of integers, so we were sure the modulus will produce something but now thats not the case, since it will most likely produce 0, or the nonce number, unless nonce is 0 you mean vrf_output % nonce? which defeats the purpose yy that so the question is, do you reckon it's safe to use the vrf_output directly as the block's rank? that will only be 0 if vrf_output = n * nonce for some n or n = 0 that shouldn't happen vrf_output is almost always greater than nonce so vrf_output % nonce will almost always be 0 modulus doesn't work like that, it will only be 0 when vrf_output is a multiple of nonce it sounds like the nonce = 0 oh true you are right nice hehe in early blocks I'm working with smoll nonces mostly 1 XD hence the 0 anyway continuing yeah nonce = 1 is {0} nonce = 2 is {0, 1} nonce = 3 is {0, 1, 2} .etc yeah there is still something missing tho i'll look at this again with pallas::Base replaced by big uint since we are using vrf_output % nonce ain't that mean that same height blocks, extending same fork, will have the same rank?
we need to add some differentiator in the mix like block hash or something since that would be unique to each block, regardless of what it extends i need to look at this deeper. will also look at the ECVRF, but what you say sounds correct, but still unsure I mean same height blocks with same nonce yeah gotcha but hard to say since idk how the VRF works if it's deterministic from the data at that point, then yes you're correct if there's nondeterminism then it's unsure its deterministic then yes you're correct since we are using n-2 vrf and nonce so blocks that extend same fork, aka having same n-2 vrf what if i make a block the same? if they have the same nonce, their ranking will be the same can i make a conflict? i guess not since the block would be different wdym make a block the same? I reckon we could do, instead of nonce, generate a biguint from the block hash bytes so the rank will be vrf_output % that although the modulus is still insecure, since blocks must have a unique rank for their height so maybe vrf_output + that? the nonce is inside the block header, so it mutates the block hash haumea: yes but blocks for same height with same nonce always have different hash nice that's good since the header includes the tx tree root, which includes the miners coinbase tx so unless they use same secret, the hash will always be different correct, sounds good as a source of randomness and same secret == same block correct ok so changing to vrf_output + block_hash_output sounds good? why not % block_hash? a yeah thats even better since each block will deffo use diff modulus ++ : @skoupidi pushed 1 commit to master: 5e9892363a: validator: changed block ranking logic : @skoupidi pushed 1 commit to master: cff856971d: darkfid: consensus fixes : @skoupidi pushed 3 commits to master: 3a9455e646: contrib/localnet/darkfid*: updated tmux scripts : @skoupidi pushed 3 commits to master: 20117e3e7d: darkfid/proto/protocol_proposal: added missing dispatcher : @skoupidi pushed 3 commits to master: 505571188d: rpc/client: new fn chad_request added haumea: the new rank logic works like a charm managed to have 2 miners both produce blocks and reach consensus on highest ranking fork to finalize its_all_coming_together.jpg \o/ (づ。◕‿‿◕。)づ I see that by default, `outbound_transports = ["tls"]`, but I see ircd trying to connect to tor endpoints as well (and failing, at least on my box) setting `outbound_transports = ["tls","tor"]` seems appropriate, and helped me get connected? loopr: hey, you are connected Yo :) also set up a weechat relay so I now can connect from my phone too gm you could also run darkirc natively on android, but I currently use my phone to connect to vpn and ssh to my server running darkirc and weechat aiya created a guide how to run darkirc on android someone sent me this on here for android might be of interest: https://mirror.xyz/0xbe62F12D86B058566E2438fA9f1c4f800f30F68E/kMAfnA4Smkb0xg8904j8rhkkobaZ6UtK2kiGgCtUxK8 Title: Running DarkFi IRC on your phone — Sam Cunningham The 2nd https://gist.github.com/nighthawk24/8072ac75feb8c46bcb1b8bf14165ae87 this is the recent one by aiya Title: darkirc on android · GitHub ty gm gm errorist: that's not good for darkirc you need to test it running locally so we get the test data it should sync automatically (different for ircd) gm gm anyone know working seed nodes for darkfid? aiya: master or tag/v0.4.1?
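for reference, a minimal sketch of the ranking rule settled on above, treating the VRF output and block hash as big integers (num_bigint crate; the function shape is hypothetical, not the actual validator code):

use num_bigint::BigUint;

// rank = vrf_output % block_hash, as discussed: same-height blocks always
// differ in hash (the coinbase differs per miner), so each block gets a
// distinct modulus. assumes a nonzero block hash, otherwise % would panic.
fn block_rank(vrf_output: &[u8], block_hash: &[u8]) -> BigUint {
    let vrf = BigUint::from_bytes_be(vrf_output);
    let hash = BigUint::from_bytes_be(block_hash);
    vrf % hash
}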
master master is in development, so no seeds got it, what are the seeds for 0.4.1 aiya: defaults work fine gr8, connected to seed lilith0 and 1, and then warning that node is not connected to other node aiya: don't bother with current testnet, it's getting nuked soon (TM) waiting for the next testnet \_(-_-)_/ : @skoupidi pushed 10 commits to master: b468b9c399: net/message_subscriber: new fn receive_with_timeout added on message subscription : @skoupidi pushed 10 commits to master: ed96c06adb: darkfid: ask peers if they are synced on sync task : @skoupidi pushed 10 commits to master: 22c4f2604b: contrib/localnet/darkfid*: tmux scripts beautifications : @skoupidi pushed 10 commits to master: c25df52c64: fud | lilith | research/dhtd/ | geode | tests : Refactor to return match statements : @skoupidi pushed 10 commits to master: 668743b83a: net : Remove unnecessary casts : @skoupidi pushed 10 commits to master: 3d4bd17a2e: net/transport/tls | crypto/constants : Match the order of impl members with the trait : @skoupidi pushed 10 commits to master: ae11be0d20: serial : Fix max_value will be deprecated: replaced by the MAX associated constant on this type : @skoupidi pushed 10 commits to master: 9d2adcd93a: runtime : Use unwrap_or_else to set retdata : @skoupidi pushed 10 commits to master: 0a51a11ed0: serial/derive : Use unwrap_or_else instead of match : @skoupidi pushed 10 commits to master: 4e8e1ea0a2: chore: clippy : @skoupidi pushed 1 commit to master: 5b38b29884: darkfid/tests: ducktaping is tau2 a shared task env or is it rather personal? or, if looking at https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html#mainnet-tasks, how to know if someone already grabbed a task? Title: Contribute - The DarkFi Book Ask here if you're willing to pick something up and we can see cool. is the order in the list in any prio or rather just a dump? No order really You can see in the table below that some tasks are taken, but likely help would be appreciated :D ooh duh, just scroll a bit more. sorry No worries lol I think it's not up to date, but it is relevant 5 looks cool to me, but might maybe be a bit too much to start 6 might be better coz it doesn't affect main code directly yet and I have done some 16 but in golang all these are taken though. suggestions? How about 17 ? WIF stuff You could try extending the PublicKey and SecretKey traits/structs BLAKE3 should be used instead of SHA is that WIF page basically a cookbook on how to do it? Yeah I think the right way to do it would be to implement a Wif trait and extend SecretKey with it didn't know wif before. But sounds good Solid entry-level task :) Don't have mastery of the complete docs page yet, but is there some page about the current address format itself? i.e. is it bip39 compatible or something? WIF is for secret keys We currently don't have any special address encoding, but we probably should Currently it's just the base58 encoding of the PublicKey (which is a pallas curve point) I see. Yeah, worth thinking about the address scheme before launch, not something to be changed easily after (although can be done of course, but creates confusion) also, haumea mentioned some instabilities with p2p networking, hinting at ircd logs are those the timeouts and different errors? loopr: ircd is on tag/v0.4.1 so it's obsolete don't look at it ah so if I'd like to try looking at those "instabilities", because I happen to enjoy p2p networking stuff, what's the best way?
Ideally, write a test harness from scratch ^^ Some kind of API that can start/stop nodes, add nodes to a network on-demand, etc. That kind of functionality So you can easily run various simulations what branch would I target then? master ++ thanks upgrayedd and brawndo yw So it'd be some kind of harness that can be initialized inside of an integration test And wraps the API in https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/net in order to be able to easily spawn a network, arbitrary nodes, etc Title: darkfi/src/net at master - darkrenaissance/darkfi - Codeberg.org Similar to how the contracts test harness does things But I'd make it more dynamic what about grpc? fuck google lol fair enough We have our own p2p stack https://darkrenaissance.github.io/darkfi/learn/dchat/dchat.html Title: P2P API Tutorial - The DarkFi Book There's a tutorial here ++ Sections 27-31 btw I might change my nick, as loopr on github is taken Use codeberg.org ;) We use github only as a mirror tbh I was wondering how you want to fuck google but use github lol Soon you'll provide a proof-of-vax to Bill Gates to make a Github account yeah loopr: preferably without a condom github is a "compromise" for reachability, we use it just as a mirror. we don't point anyone to use it upgrayedd: I get it. I am not familiar with codeberg, it's under the tor contrib in the docs though. Is the use of codeberg with tor mandatory? Not really, depends on your opsec thought so. and can codeberg do PRs? Yep ++ btw I am still under contract until end of march, hoping to be free a bit earlier though, but until then my contrib will be gradually increasing fuck google what did they do now? the gemini thing? gm, afk today gm, same gm there is an impostor among us andsoitbegins.jpg like this https://www.youtube.com/watch?v=4dAaLbsQSzI Title: Constellation — Official Trailer | Apple TV+ - YouTube even my mental health is fucked by google. i use porn on google. dont use github lol, there it is lmao upgrayedd bot every masterpiece has its cheap copy "i use porn on google" XD child: mom I want upgrayedd!! mom: we got upgrayedd at home. XD i am the masterpiece lol here it is again one can only dream... dream on... hahaha i'm not important enough to have an impostor the bot is delivering content, that's nice porn has destroyed my brain up to the point of no return. so im on meds im autist and superior to normal humans !! can i get some of your droogz hello i found that v0.4.1 darkirc has problems with musl it bypasses requests in torified environments i tested it with both alpine and void musl septic: you don't need to torify it, darkirc has native tor support by v0.4.1 I assume you mean bin version, not repo tag, correct? oh it's not darkirc it's the old ircd one yes ircd will be obsolete so don't bother :D nice out of curiosity, wdym by bypassing? can you state the complete test case? <\ can i connect one windows laptop and a linux laptop using a single ethernet cable by giving them static ips <\ auto eth0 <\ iface eth0 inet static <\ address 192.168.1.2 <\ netmask 255.255.255.0 <\ for example ^^ <\ or do i have to modify the ethernet cable physically \ sorry you need an ethernet crossover cable, then it should work <\ hmm thanks upgrayedd: He means that any lib (e.g. torsocks) which uses the LD_PRELOAD hack to hijack connect(2) doesn't work upgrayedd: That's likely, since using musl will statically link the libc brawndo: so torify in general doesn't work in musl?
It does if you have a dynamic link to libc aha good to know A library like this is the smallest one I know (I wrote it) https://github.com/wireleap/client/blob/48465449c5e3a20f2d4157cfe1fa90faca9bfa50/wireleap_intercept/wireleap_intercept.c So a program connecting somewhere would use connect(3), which means the ELF has that symbol When it's a dynamic library, it will load that library and use the symbol there (usually there's a table) But then on Linux, you can use LD_PRELOAD to load another library and prefer its symbols, so essentially you replace the function call with your own code That's one of the big arguments against dynamic linking, since it allows you to hijack functions But then again, useful in cases like this > (I wrote it) source: trust me bro :D im getting errors on trying to connect darkirc 05:02:30 [ERROR] [EVENTGRAPH] Sync: Could not find any DAG tips 05:02:30 [ERROR] Failed syncing DAG (DAG sync failed), retrying in 10s... 05:02:40 [INFO] Syncing event DAG (attempt #2) 05:02:40 [INFO] [EVENTGRAPH] Syncing DAG from 0 peers... 05:02:40 [ERROR] [EVENTGRAPH] Sync: Could not find any DAG tips 05:02:40 [ERROR] Failed syncing DAG (DAG sync failed), retrying in 10s... 05:02:50 [INFO] Syncing event DAG (attempt #3) 05:02:50 [INFO] [EVENTGRAPH] Syncing DAG from 0 peers... 05:02:50 [ERROR] [EVENTGRAPH] Sync: Could not find any DAG tips 05:02:50 [ERROR] Failed syncing DAG (DAG sync failed), retrying in 10s... 05:02:55 [WARN] error: tor: remote stream error: Protocol error while launching a data stream: Received an END cell with reason MISC 05:02:55 [WARN] [P2P] Failure contacting seed #1 [tor://f5mldz3utfrj5esn7vy7osa6itusotix6nsjhv4uirshkcvgglb3xdqd.onion:5262]: tor: remote stream error: Protocol error while launching a data stream 05:02:55 [WARN] [P2P] Seed #1 connection failed: tor: remote stream error: Protocol error while launching a data stream 05:03:16 [WARN] [P2P] Failure contacting seed #1 [tor://f5mldz3utfrj5esn7vy7osa6itusotix6nsjhv4uirshkcvgglb3xdqd.onion:5262]: tor: remote stream error: Protocol error while launching a data stream 05:03:16 [WARN] [P2P] Seed #1 connection failed: tor: remote stream error: Protocol error while launching a data stream 05:03:17 [WARN] error: tor: remote stream error: Protocol error while launching a data stream: Received an END cell with reason MISC 05:03:17 [WARN] [P2P] Failure contacting seed #0 [tor://rwjgdy7bs4e3eamgltccea7p5yzz3alfi2vps2xefnihurbmpd3b7hqd.onion:5262]: tor: remote stream error: Protocol error while launching a data stream 05:03:17 [WARN] [P2P] Seed #0 connection failed: tor: remote stream error: Protocol error while launching a data stream 05:03:17 [ERROR] [P2P] Network reseed failed: Failed to reach any seeds 05:03:20 [INFO] Syncing event DAG (attempt #6) 05:03:20 [INFO] [EVENTGRAPH] Syncing DAG from 0 peers... 05:03:20 [ERROR] [EVENTGRAPH] Sync: Could not find any DAG tips 05:03:20 [ERROR] Failed syncing DAG. Exiting. hello https://paste.debian.net/plain/1308516 errors when using darkirc i followed https://darkrenaissance.github.io/darkfi/misc/tor-darkirc.html?highlight=darkirc#step-2-build-and-run-darkirc Title: tor-darkirc - The DarkFi Book F?dT#c|P[: the default seeds will not work try this seed: seeds=["tcp+tls://dasman.xyz:5262"] noting that darkirc is not 100% stable rn as we are still testing also in future, plz paste error logs in pastebin etc not in the chat link to pastebin is perf F?dT#c|P[: are you trying to connect with tor only? 
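going back to the LD_PRELOAD explanation above, a minimal sketch of a connect(2) interposer as a Rust cdylib (build with crate-type = "cdylib", run with LD_PRELOAD pointing at the .so; null/error checks omitted, and as noted it won't work on statically linked musl binaries):

use libc::{c_char, c_int, dlsym, sockaddr, socklen_t, RTLD_NEXT};

// shadow libc's connect; under LD_PRELOAD the dynamic loader prefers our symbol
#[no_mangle]
pub unsafe extern "C" fn connect(fd: c_int, addr: *const sockaddr, len: socklen_t) -> c_int {
    // look up the real connect we are shadowing
    let real: unsafe extern "C" fn(c_int, *const sockaddr, socklen_t) -> c_int =
        std::mem::transmute(dlsym(RTLD_NEXT, b"connect\0".as_ptr() as *const c_char));
    // a torsocks-style lib would inspect/rewrite `addr` here (e.g. to a SOCKS proxy)
    real(fd, addr, len)
}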
errorist: yes im trying to connect through tor only drao: doesnt the default tor seeds work? *draoi i'm running a tor seed, try with it: 6pllu3rduxklujabdwln32ilvxthvddx75qg5ifq2xctqkz33afhczyd.onion not sure it will work though, haven't fully tested it errorist: let me try. are the default 2 tor seeds down? yes they are outdated from the prev release draoi: ah okay errorist: is the port 5262 no, it's 25551 give it a try and lmk sure i was able to connect to the tor seed errorist: is it tor-only or also a bridge seed (i.e. supports other transports)? i was not able to get any peers from the tor node even tho i connected to it fine is your hostlist empty? draoi: let me check actually i think it's some other problem sec ok yep it's that hostlist empty after seeding could just be since it's a new seed draoi: so I have this in my darkirc config: inbound = ["tcp://127.0.0.1:31337"] external_addr = ["tor://6pllu3rduxklujabdwln32ilvxthvddx75qg5ifq2xctqkz33afhczyd.onion:25551"] i could connect to the said in logs it said connected. and this in lilith: accept_addrs = ["tcp+tls://0.0.0.0:31337","tcp+tls://0.0.0.0:25551"] *seed where was the path for hostlist? errorist: the path is specified in your config file but here it seems you are re-using the darkirc tor addr for the seed but i get Read error Malformed packet. send_version() failed: Channel stopped oh yeah you should have separate onions for darkirc external addr and for lilith draoi: this is my hostlist https://pastenym.ch/#/56MXfD8H&key=665998472025a52cfccf3fc768cc477e Title: Pastenym is that lilith or darkirc hostlist errorist? draoi: lilith weird anyway first step is to create an onion for the seed draoi: ok, so I have to put the onion hostname in the lilith config so it can be announced? yes but cannot be the same, has to be a different onion link? sec lemme check my seeds brawndo: can i rename SDKs ecvrf.rs to vrf.rs? yes otherwise you would be receiving all traffic on the same addr and port yep so the lilith accept_addr should be the lilith onion, and that's what you share to other nodes to connect to ok, thx, i'll try and set this up later today keep the darkirc external addr etc as is, that's good as it means ppl can connect to you as a peer ++ F?dT#c|P[>: i suggest to retry later when errorist's tor seed has been configured properly okay we're testing a new release of darkirc rn so that's why nodes are only partially deployed etc draoi: i also have this in my config im running a tor address external_addr = ["tor://2tcllnev6t4vlbta5ks6mnfrdn62ibpbf3yuhzxoszd7r3kooo4xvsyd.onion:25551"] nice ty, that's good for the health of the network for tor we are using Arti (rust tor protocols) which is still under development on the prev darkirc release it was mostly working but a bit flaky at times due to arti stuff yes its in the docs ++ tnx for testing, are you a dev? yeah nice, welcome thanks draoi: so I created a new hidden_service dir for lilith under a new port and got a new onion link, put that in the lilith config accept_addrs: tcp+tls://cbfeuw6f4djcukyf3mizdna3oki6xd4amygprzdv4toa3gyzjiw6zfad.onion:25552" but now I get this in the lilith log: [ERROR] [P2P] Error starting listener #1: IO error: uncategorized error if I do tcp+tls://0.0.0.0:25552 I don't get any errors, but does it get the onion hostname from tor? https://pastenym.ch/#/miOGlXDU&key=a14a9fc53ee02bf2d098bf30aee1170f Title: Pastenym halp :D you have the tor addr inside [] right like accept_addr = ["..."] in the darkirc config?
I only have the darkirc one, should I add the lilith one too? no in the lilith config so we're talking about 2 onions, once is for your lilith node, the other for your darkirc node yes, I tried that, but it says error starting listener do you have [] or can you paste that line yes, one sec accept_addrs = ["tcp+tls://0.0.0.0:31337","tcp+tls://cbfeuw6f4djcukyf3mizdna3oki6xd4amygprzdv4toa3gyzjiw6zfad.onion:25552"] is tcp+tls correct? I tried also with tor:// but it says invalid transport it should be tcp actually since tor connections happen on localhost via the onion let me try with just tcp://cbfeuw6f4djcukyf3mizdna3oki6xd4amygprzdv4toa3gyzjiw6zfad.onion:25552 actually can you try with tcp://0.0.0.... and i will try connect to the onion not sure whether you have to specify the onion in the config or if it's sufficient to just share with nodes who want to connect so let's test :D same: [ERROR] [P2P] Error starting listener #1: IO error: uncategorized error yeah for some reason it only works with 0.0.0.0 wdym can you paste the line you just put? also plz note the port numbers you're using a different port on your onion and on your tcp addr are you mapping those ports in your tor config? yes, I have this in torrc: HiddenServiceDir /var/lib/tor/darkirc HiddenServicePort 25551 127.0.0.1:25551 HiddenServiceDir /var/lib/tor/lilith HiddenServicePort 25552 127.0.0.1:25552 ok so you need to set the accept_addr to port 25552 on lilith try connecting now accept_addrs = ["tcp+tls://0.0.0.0:31337","tcp://0.0.0.0:25552"] I have this in lilith now and I was able to connect now was tcp+tls before and I got the errors, but now I think it looks fine ok sec 11:43:17 [INFO] [P2P] Connected seed #1 [tor://cbfeuw6f4djcukyf3mizdna3oki6xd4amygprzdv4toa3gyzjiw6zfad.onion:25552] 11:43:18 [INFO] [P2P] Disconnecting from seed #1 [tor://cbfeuw6f4djcukyf3mizdna3oki6xd4amygprzdv4toa3gyzjiw6zfad.onion:25552] I only have this error in the darkirc log now: 11:43:16 [ERROR] send_version() failed: Channel stopped but this was there before too yeah that's not necessarily an issue since it's normal to connect to seed and then insta disconnect, resulting in channel errors ic seed connection worked nice : hello from tor also semi related i pretty much know now why we were having issues with offline nodes !list No topics !topic offline host filtering Added topic: offline host filtering (by draoi) nice that's excellent news, i think this is the only blocker rn to move on https://www.pygame.org/docs/ref/time.html#pygame.time.Clock Title: pygame.time — pygame v2.6.0 documentation actually nvm that link, not relevant for ping-pong F?dT#c|P[: do you see the reply back from test bot in weechat? I don't until I restart weechat, this is a known error, will have to be debugged errorist: not all messages. i dont get all messages in darkirc. but how you know im using weechat ? im not using weechat it's what is recommended in the docs but you can use any irc client errorist ah which client are you using? 
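to summarize the working setup from the exchange above (same ports/onions as in the logs; it's still an open question above whether the onion must appear in the config at all or only be shared out-of-band):

torrc maps each onion port to a local port:
HiddenServiceDir /var/lib/tor/lilith
HiddenServicePort 25552 127.0.0.1:25552

lilith listens locally in plain tcp (tor terminates the onion side):
accept_addrs = ["tcp+tls://0.0.0.0:31337", "tcp://0.0.0.0:25552"]

and what peers actually dial is the onion addr:
tor://cbfeuw6f4djcukyf3mizdna3oki6xd4amygprzdv4toa3gyzjiw6zfad.onion:25552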
I should try a different one, maybe it's a client issue on my side errorist: im having problem on all clients : test : test back test back it's normal not getting all messages because ircd is different and messages from ircd are not mirrored in darkirc but when you do test you should see the test back yes i got the test back plan is to migrate from ircd to darkirc when it's stable enough :) cool darkirc has msg retention up to 1 day nice, that's fine then so you can recv msgs when offline biab, testing some stuff which requires me to go offline therapy stopped working :/ https://agorism.dev/uploads/broken.py : @draoi pushed 1 commit to master: 874c2cc85e: net: only remove peer from hostlist after the configured quarantine limit ^ that was a bugf /s/bugf/bug nice 10:01 brawndo: can i rename SDKs ecvrf.rs to vrf.rs? Pls don't, that's too ambiguous vrf's are many ok nw Hi! Hope you doing great, I saw that you are offering positions in the project :D How is the process? Apparently it's very spontaneous right? By making commits and contributing you get to a place. hi ash Hi! https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html Title: Contribute - The DarkFi Book Thanks, took a look at that part of the book, so that's a yes to my question. Which dev profile are you requiring right now draoi? ffcc ash: make a couple of commits, show us you're capable and have what it takes I'm thinking about the point (17. Implement address/secretkey differentiation) as a first commit. Also, contributing to the docs if needed. At least to start ash: I just picked up that task couple of days ago oh got it lmk anything you need If there are any contribution to the docs needed, it would be a good first start. As a second step it would be cool to tackle the point 14 14. Implement cross-chain atomic swaps (XMR, ETH, anything applicable) I'm interested in doing an atomic swap with Cardano which is blockchain I come from. the* sounds pretty cool Do you have orientation about how to proceed with both? The current atomic swap is meant for cross-chain transactions? current darkfi contract* ash: no its between native assets Ok got it. So before starting on it let me know if there's someone else working on point 14? ash: how about adding it as topic for tomorrow's meeting? good I believe you may be rushing tomorrow's meeting you mean monday meetings? yy rush? I agree it's better to talk about it tomorrow rushing to start something of that magnitude without discussing the hows first Agree, by starting I meant making an expectation towards it and start investigating, for sure I will be asking about it tomorrow and see how I can help (Y) But I must confess that my curiosity drove me to be the last two hours looking info about how to implement an atomic swap xD Damn the rabbit holes ash: yeah I know excitement can blind us some times For sure btw already mingled with an xmr/eth swap impl, so the cardano one could just be an addon there hence why I say hold your horses, you probably don't have the full picture :D Hahaha take the advice whats your background btw? if you are comfortable can you share your repos?
you are well aware that most "zk projects" are not about privacy indeed if you're comfortable to share it publicly yeah throw it here, otherwise we can set up dms but we are focusing on privacy https://github.com/AgustinBadi Title: AgustinBadi (ash) · GitHub did you know that our current testnet consensus is based on ouroboros crypsinous? yes! There are similarities, that's why I got excited about this project too we're going PoW for next testnet though, we are cooking merge mining with monero To be more precise about the "zk" expression, with my team we are working on a groth16 validator, and soon start porting Semaphore didn't need to clarify, already on modulo-p repos ;) Yeah I saw that update, CPU PoW is interesting, PoS always had that plutocracy that for me is like a downside well also PoS assumes you know stakers, which is contradicting to the full private paradigm hence the "based on ouroboros crypsinous" statement kiayias made a hella lot of assumptions in that paper that are never properly stated In our impl we pretty much eliminated all these assumptions Also it is true. Few people tackle that beast of papers, is interesting to hear about it critically. ash: well most people don't know that ouroboros papers are pretty much ~70% identical, so once you've read the first, all next are DLCs Which assumptions seemed off to you? But probably that's a long road, honestly before I want to know how you addressed the atomic swap? The solution I saw was the Hash locking contract by commitment hashes essentially sorry I have that topic on the tip of my tongue hahah re: ouroboros assumptions, just read the protocol overview: (Y) The protocol Ouroboros-Crypsinous assumes as hybrids a network F_N-MC, a non-interactive-zero-knowledge scheme F_NIZK, a forward-secure encryption scheme F_FWENC, a global clock G_CLOCK, a global random oracle G_RO, a non-interactive equivocal commitment protocol, and a CRS used by the commitment protocol, to supply the commitment public key, F_CRS ash: check this re atomic swaps: https://github.com/AthanorLabs/atomic-swap Title: GitHub - AthanorLabs/atomic-swap: 💫 ETH-XMR atomic swap implementation Good, I'm in the protocol.md reading about it and also the links drove me to this blog https://comit.network/blog/2020/10/06/monero-bitcoin/ Title: Monero-Bitcoin Atomic Swap | COMIT Developer Hub Going to carefully read it tomorrow, the atomic swap is my target. However is always good to take babysteps towards it, we can talk tomorrow if there are some middle quests where I can bring help and learn. salud it was suggested to implement the wif formatting as a trait - why is that? Will there be multiple key representations requiring format conversion? Will the conversion be applied to more than just private keys? In other words, what data types will be using this trait? Or is a utility fn an alternative? gm gm hihi loopr: i don't think it applies to WIF specifically, i think it's just a general way of encoding user data which is: [version:1] + [data:...] + [checksum:4] where data can be any length !list Topics: 1.
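a minimal sketch of that encoding as a trait, using BLAKE3 for the checksum as suggested earlier (bs58 + blake3 crates; the trait and method names are hypothetical, not an agreed API):

// hypothetical WIF-style encoding: [version:1] + [data:...] + [checksum:4]
trait Wif {
    const VERSION: u8;
    fn wif_bytes(&self) -> Vec<u8>;

    fn to_wif(&self) -> String {
        let mut buf = vec![Self::VERSION];
        buf.extend_from_slice(&self.wif_bytes());
        // checksum = first 4 bytes of blake3(version || data)
        let checksum = blake3::hash(&buf);
        buf.extend_from_slice(&checksum.as_bytes()[..4]);
        bs58::encode(buf).into_string()
    }
}

decoding would just reverse the steps and reject anything whose version byte or checksum doesn't match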
offline host filtering (by draoi) !topic p2p next steps Added topic: p2p next steps (by haumea) !topic runtime hardening Added topic: runtime hardening (by haumea) !topic sparse merkle tree gadget for ZK Added topic: sparse merkle tree gadget for ZK (by haumea) !topic darkfid status and DRK update Added topic: darkfid status and DRK update (by haumea) !topic event graph status update + tooling Added topic: event graph status update + tooling (by haumea) !topic python bindings + swiss army tool Added topic: python bindings + swiss army tool (by haumea) !topic project wide unit tests and improved coverage Added topic: project wide unit tests and improved coverage (by haumea) !list Topics: 1. offline host filtering (by draoi) 2. p2p next steps (by haumea) 3. runtime hardening (by haumea) 4. sparse merkle tree gadget for ZK (by haumea) 5. darkfid status and DRK update (by haumea) 6. event graph status update + tooling (by haumea) 7. python bindings + swiss army tool (by haumea) 8. project wide unit tests and improved coverage (by haumea) gm greets ash yo Talked a lot with upgrayedd yesterday about how to contribute, one of my targets is point (14) cross-chain atomic swaps, but of course since I'm new to the project it would be cool to find more accessible commits and areas where I can contribute. hello o/ anyone from darkfi i can speak to? : @draoi pushed 1 commit to master: b126c6f71e: outbound: don't try to connect to nodes that are being migrated spore: we are all darkfi what do you want? if not dev related, please ask random stuff on #random ack * hi o/ \o \o ( ^_^)/ <0xhiro> hi everyone is the weekly dev meeting about to commence? 0xhiro: hello, yes moshi moshi aloha o/ gm yo howdy hey i was d/c, internet was being weird !start Meeting started Topics: 1. offline host filtering (by draoi) 2. p2p next steps (by haumea) 3. runtime hardening (by haumea) 4. sparse merkle tree gadget for ZK (by haumea) 5. darkfid status and DRK update (by haumea) 6. event graph status update + tooling (by haumea) 7. python bindings + swiss army tool (by haumea) 8. project wide unit tests and improved coverage (by haumea) Current topic: offline host filtering (by draoi) draoi: here? test test back hey sorry so i've been looking into issues with nodes reconnecting when they go offline we are trying to make it so you could have darkirc running on a phone and be in and out of internet and be able to reconnect seamlessly gm <0xhiro> gm! however there are a couple of issues namely, darkirc nodes essentially delete the (transport compatible) nodes in their greylist when they are offline since the greylist process will continue, removing nodes as it goes similarly, nodes will also remove white and anchorlist connections from their hostlist when they fail to connect to them in outbound session what this means is when the node regains internet it may have difficulty reconnecting if it has deleted its hostlist it is possible to delay this using config settings quarantine limit will slow the time it takes to delete nodes from white and anchorlist draoi: does peer discovery ask the seed again at any point?
if for example we add a rule where: if peers completely empty, ask seed instead then when you come back online, after the sleep time passes, you will ask seed and grab fresh peers list to play with while the greylist refinery timer will slow the time for greylist deletion, so we could play around with settings and see if there's a special setting we can make for these for nodes that expect to be in and out of internet you can't expect nodes to always be online, it should be able to handle nodes going offline without forever blacklisting them upgrayedd: yes outbound connect reseeds after a certain number of failed attempts rn we blacklist nodes after a configurable limit of failed connections the quarantine connect limit rn the default is 50 so i've been playing around with timing this and seeing what is acceptable here however the recent net change introduced a bug that i'm looking into rn not sure what it is yet, something blocking the stop signal CTRL C rogue detached process yeah Time to rewrite p2p? XD i think best case scenario we can come up with some settings for nodes with patchy connections that will allow you to go offline for some period of time (1hr or whatever's acceptable) what is the blacklist for? an easier logic would be: if I can't connect to any peer, pause refinery process since I assume that it might be me, not them haumea: in outbound connect, if i can't connect to peer, i try again N times, if i still can't connect, i blacklist maybe it shouldn't be doing that just greylist blacklist should be misbehaving nodes ++ ok lemme think that thru kk ty all ty !next Elapsed time: 10.9 min Current topic: p2p next steps (by haumea) is it exp back off on retry or just retry? oops apologies for clicking next not sure i understand Q gentlemanloser can u rephrase brawndo: p2p next steps: rewrite gentlemanloser: i think it's just retrying with no exp backoff XD " if i cant connect to peer, i try again n times..." is it fixed time between retry or an exponential back off in time between retry you can see the logic in src/net/session/outbound_session.rs and check the connect Slot ++ it is fixed time between retry ++ Ill read code ty then after N times, will trigger seed process well we are talking about 2 loops here good for next check src/net/hosts/store.rs quarantine() ok about next steps in net/message.rs, when receiving a new message, we get the length and preallocate a buffer this is vulnerable over the network, so would need to be fixed darkfi/src/net/message.rs:129 haumea: can't the message! macro set max limit, so the allocated buffer always checks against that? then as well in src/net/channel.rs, there's a running process inside Channel which owns an Arc to the channel, which means channels *must* be stopped or they leave dangling messages upgrayedd: yeah a max limit would work fine too darkfi/src/net/channel.rs:255 so this: async fn main_receive_loop(self: Arc<Self>) -> Result<()> { haumea: yeah I reckon thats the best solution, since you abstract it for the protocol to use what it deems fit, not a hardcoded limit enforced everywhere should be changed to use Weak instead of Arc, and never hold Arc for a long period of time so dangling Channels are not kept around although .stop() is still the recommended way to stop channels in fact net should be checked for any dangling processes or references which must not exist, so all ownership semantics should be a strict hierarchy also right now the MAGIC_BYTES are not configurable.
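a rough sketch of the max-limit idea for the preallocation issue above (shapes simplified: io::Error instead of the darkfi error type, a plain u64 length prefix instead of VarInt):

use futures::io::{AsyncRead, AsyncReadExt};
use std::io::{Error, ErrorKind, Result};

const MAX_MSG_SIZE: u64 = 1024 * 1024; // cap; would be app-configurable

async fn read_payload<R: AsyncRead + Unpin>(stream: &mut R) -> Result<Vec<u8>> {
    let mut len_buf = [0u8; 8];
    stream.read_exact(&mut len_buf).await?;
    let len = u64::from_le_bytes(len_buf);
    if len > MAX_MSG_SIZE {
        // refuse before allocating anything the peer asked for
        return Err(Error::new(ErrorKind::InvalidData, "oversized message"));
    }
    let mut buf = vec![0u8; len as usize];
    stream.read_exact(&mut buf).await?;
    Ok(buf)
}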
But p2p should allow configuring them darkfi/src/net/p2p.rs:58 that way different applications have incompatible messages and there's no potential for protocol crossover/confusion (two apps using the same ports) this DEP 0001 needs to be applied: https://darkrenaissance.github.io/darkfi/dep/0001.html Title: DEP 0001: Version Message Info (accepted) - The DarkFi Book examples for "apps" here being ircd and darkfid? as well as a way to set the values used for version messages loopr: yes for example we also need a resource manager (see how it's done in libp2p) so applications can set limits. then if breached, the channel is stopped. but it will be better once darkfid is finished, then we have an actual protocol to engineer for with DoS protections ^ those are the pending works for p2p draoi: btw we should not have any blacklist imo since they don't work with tor, and that's our target wdym can you explain re: blacklist just a soft greylist how can you blacklist a tor node? we have a hashset of nodes we classify as "rejected" also people can create new IPs easily ah i get you the only blacklist i think should exist, is if a node does something bad/misbehaving ACK but then it's basically impossible to blacklist them tbh because they just create a new IP or tor addr net code is a priority to get working, then we can review and send for audit draoi: what happens if a P2P instance has no seed nodes configured? it won't run wait lemme check we need to enable swarming support in the future, but i don't want to add new features this late in the cycle but we should at least engineer it so that we can easily build this on top of the core net code yeah you need some seed node configured otherwise it's an invalid config if you have a seed node configured which is offline it's not a problem providing you have valid online peers in your hostlist so basically swarming just means the p2p instance is created, but instead of connecting to a seed node, we request addrs for this instance in a special overlay p2p net actually it would run, but won't make any connections, just running the loops and waiting for connections so the overlay would have seeds, but not this swarm aha great ty dasman i just tested now and it gave an error with no seed configured draoi: Comment the whole thing i did #seeds = [] but also outbound loop will connect to other nodes in your hostlist so it's not true it will loop forever i think we can make the seed trigger process a trait oh yeah my bad i had some garbage in the config so yes you can run w/o seed PeerDiscovery Also if the app is using event_graph it will eventually fail to sync and stop so this is configurable by the API user dasman: unless there are peers in the hostlist it can connect to (in which case event graph will sync fine) Correct a big function of the hostlist is to reduce pressure on seed nodes seed nodes should ideally only be used once for the very first time and then never again ok next? ++ : @zero pushed 1 commit to master: 18efbb9c28: book: expand contrib net task list ^ just added those tasks to the book !next Elapsed time: 22.5 min Current topic: runtime hardening (by haumea) the WASM runtime in src/runtime/ exposes host functions to the WASM contract runtime we pass data around using set_return_data(). idk if anyone is experienced with WASM, and finds anything dodgy we're doing here anyway we will need to go through this at some point and do a review might be errors lurking here /all next? 
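re the configurable magic bytes point above, a sketch of the shape it could take (the field name and placeholder value are made up; today it's a hardcoded constant in net/p2p.rs):

// hypothetical: move MAGIC_BYTES into per-app p2p settings so two apps
// sharing a port can never parse each other's messages
pub struct Settings {
    pub magic_bytes: [u8; 4],
    // ... the rest of the existing p2p settings ...
}

impl Default for Settings {
    fn default() -> Self {
        Self { magic_bytes: [0xd9, 0xef, 0xb6, 0x7d] } // placeholder value
    }
}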
!next Elapsed time: 2.3 min Current topic: sparse merkle tree gadget for ZK (by haumea) we have a sparse merkle tree impl in darkfi/src/sdk/src/crypto/smt.rs:1 Yeah just needs a review but I think we're ok but no corresponding ZK gadget <0xhiro> just curious who is writing dev documentation here? the SMT allows set exclusion proofs. right now the DAO voting could use this if this is made 0xhiro: we all are I'm gonna finish that this week the runtime review or SMT? <0xhiro> ic cos i wld like to help out as a mentee SMT wow that's really impressive dude DOOD I also kinda want to rewrite p2p lol Delete the bugs and shit nice 3rd time the charm Most of our code is like that one thing is that protocols are not explicitly stopped which makes implementing them easier. if they were it would be more correct, but i'm not sure it's the right move In what sense? upgrayedd: the whole credentials/zk thing i rewrote like a dozen times, including the zkvm. there was even a lisp version, and a version using jinja2 web macros I was sarcastic, but ok :D brawndo: so when a protocol is stopped, it cannot close down gracefully or close any dangling refs, but i don't think it's a bad thing oh yeah i guess zkvm was 3rd time lucky brawndo magic :) Bonus points to whoever can see the bug here: https://github.com/darkrenaissance/darkfi/blob/master/src/net/message.rs#L153-L158 haumea: It should be able to close them provided there is a tracker in the protocol The dangling refs are just channels? what's the bug here? overflow? since you already got magic bytes written if package is u64::max ? s,package,packet packet.payload len The bug is that the VarInt takes the string length, but we write the bytes of the string So if the command is non-ascii, there's a length mismatch I think it just works, but it's implied not explicit behaviour lol, emoji commands are tight :D yeah lol nice next? !next Elapsed time: 11.2 min Current topic: darkfid status and DRK update (by haumea) ok so drk needs to be updated for darkfid, which should be getting close to ready soon darkfid status: the contracts in src/contract/* have unit tests showing how the contracts are used client wise then we have to migrate these to the new darkfid since the old drk tool is likely broken also the CLI itself can be improved and given a bit more thought/care, maybe the way git is stateful too consensus seems to work good, tested using 5 miners, all reached consensus on what to finalize, sorting out some coms bugs and then the new sync logic will be implemented sweet after that I will focus on drk money stuff, to update and test them can you test 5 miners on a single machine? cool loopr: of course you can I mean resource wise How else are you gonna heat up your house? contrib/localnet/darkfi-five-nodes XD iirc it sets up 2 threads on each miner so you should be good with a 10 threads machine ++ brawndo: lol back in my gpu mining days, that was my primary heating source anyway, haumea, anything to ask/add re: darkfid? should i start migrating drk or wait?
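to spell out the message.rs bug brawndo pointed at above (assuming "string length" there means the char count): for non-ascii commands the char count and the byte count disagree, so the VarInt header lies about the payload that follows. tiny illustration:

fn main() {
    let cmd = "ping"; // ascii: chars == bytes
    assert_eq!(cmd.chars().count(), cmd.len());

    let cmd = "püng"; // non-ascii: 4 chars but 5 bytes on the wire
    assert_eq!(cmd.chars().count(), 4);
    assert_eq!(cmd.len(), 5);
}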
(or if others here want a task to work on) well I told you before, you can start migrating using contrib/localnet/darkfi-single-node I have added the config steps you need to take ty tldr: miners rewards address must be the one you generated in drk so you receive the miners rewards to spend gotcha lmk if you need any help to set it up will do !next Elapsed time: 6.2 min Current topic: event graph status update + tooling (by haumea) I have already duct-tape fixed scan great so it should work(TM) dasman: any news? currently I'm able to send the graph state to tui, and now working on the events update, like dnetev! ok i'm not sure you even need a tui tbh brawndo: btw I've also added the changing default address functionality in DRK Well it was inspired by bnet By inspired I mean copy paste :D Cool <3 but it's cool to go through the list and see what's happening, like unreferenced tips getting updated etc.. aha great tui :eyes you send it using RPC, but could you maybe just read it from disk directly? Not possible, since the db is locked to whatever app is running haumea: if sled is used to store the event-graph no you can't ok since you can't have more than one reader Yup #safety_first yeah but the event graph could write a separate data log and that is being piped in why not rpc? well if you are savvy enough, you can tail -f | awk the log file and generate the "records" like that that's quite fragile dasman: well if you prefer rpc then it's fine too, since it's a networked component anyway ok !next Elapsed time: 6.9 min Current topic: python bindings + swiss army tool (by haumea) ty also another place for contributors to look into is the python bindings in src/sdk/python/, which is used as a test by the ZK debugger in bin/zkrunner/ but ideally would have more bindings for working with data that is used in the `drk` tool for inspiration see: https://github.com/libbitcoin/libbitcoin-explorer/wiki/How-to-Spend-Bitcoin Title: How to Spend Bitcoin · libbitcoin/libbitcoin-explorer Wiki · GitHub !next Elapsed time: 2.3 min Current topic: project wide unit tests and improved coverage (by haumea) also see `make test` and `make coverage` !next Elapsed time: 0.4 min No further topics any questions/topics? otherwise we close now ty frens ty all Me for sure not a prio rn, but I am curious if other bindings are interesting - e.g. golang i'm gna rm the quarantine stuff and think about offline greylist filtering Thanks everyone what do you guys think of idena? If the topics are all cleared, wanted to know where I can help. an anon version of worldcoin? dune: take this to #random or #markets ah k ash: the last couple topics were about where contributors can help out we are not talking shitcoins here lol loopr: could be done with uniffi maybe you mean like a uni project? ash: ^ read the meeting transcript loopr: google yeah florida (correction: duckduck ;) ) https://github.com/mozilla/uniffi-rs Title: GitHub - mozilla/uniffi-rs: a multi-language bindings generator for rust or ash: see https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html Title: Contribute - The DarkFi Book The last points where you are working haumea aa - haaaa gotcha ash: unit tests (how I started, and how I should be going as well) Good another Q: I am now running an ircd on a pi at home What is the point "28. Tutorial creating a ZK credentials scheme." is about?
But if I shut down my laptop, and connect later to it, it doesn't load missed comments That only works via the weechat relay http://agorism.dev/log you guys running ircd all the time, i.e. never shutting down your boxes? pretty much and saying afk when not online ok cool loopr: you can also run weechat on the pi #darkircfixesthis and either ssh to it or turn weechat into a server loopr: yeah I do run it all the time, darkirc will have history yeah that's what I am doing on the phone I guess will configure my laptop to do same weechat-android gud are we done? yy Point 28 is free? i think so ash Ok, so I'm going to take a look about that. If someone has more context about the point 28 or knows what parts of the docs I can help with please let me know. Ty i'm not sure that's not the easiest task i mean it's not an easy task unless you've made a contract in darkfi before : @skoupidi pushed 1 commit to master: 59580b55b2: .github/workflows: update apt cache before installing deps : @skoupidi pushed 1 commit to master: 490615a94b: .github/workflows: update apt cache before installing deps : @skoupidi pushed 1 commit to master: 57bb4f8477: darkfid: minor communications fixes haumea: If you an easier task I can help with please let me know. If you know* haumea: However learning to do a darkfi contract would be cool af, as long as it doesn't require too much math background for me it is a good challenge Still I'll wait until I carefully read the book and look at tasks in the source code. I'll come back in a couple days <|"p@~uh}6$]==Dub> test test back <|"p@~uh}6$]==Dub> srry gm gm gm gm gm : @draoi pushed 1 commit to master: de78b48549: net: downgrade host (don't remove or blacklist) if we can't connect ^ this commit fixes the bug mentioned yday which was blocking CTRL C haumea: currently there are two errors possible in the version handshake, ChannelTimeout and ChannelStopped ChannelTimeout can happen when we are trying to connect to an offline node ChannelStopped can happen if a node goes offline when we are trying to connect to it (or more precisely, goes offline when we are trying to do a version handshake with it) nice, we can also create separate light and dark grey lists for reliability in the future however discriminating too much between nodes based on reliability makes the network vulnerable to sybil : @parazyd pushed 1 commit to master: d9b8bcf84a: net/hosts: Drop q lock when possible. from the perspective of security, having no refinery is the most secure since all nodes are randomly selected, but then it's less reliable Since you're working with a lot of locks, you should remember to drop them whenever might be needed i forgot the other error which is Version mismatch error ++ ty brawndo That would happen here: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/net/protocol/protocol_version.rs#L142-L153 Title: darkfi/src/net/protocol/protocol_version.rs at master - darkrenaissance/darkfi - Codeberg.org correct Although there's a logic bug? It's using OR So it'd fail in either case :D Even if minor ver is different That should be fixed ++ quick Q tho, is ChannelStopped a bad enough offense that would make us want to forget or blacklist a node? also draoi, did you see the bug yesterday bra*ndo caught, the one me and upgrayedd couldn't see?
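The OR bug mentioned above, sketched (illustrative only, not the actual protocol_version.rs code): with `||` the check rejects a peer when either component differs, so even a compatible minor-version bump fails the handshake.

    fn compatible_buggy(ours: (u32, u32), theirs: (u32, u32)) -> bool {
        // BUG: `||` also rejects a harmless minor-version mismatch.
        !(ours.0 != theirs.0 || ours.1 != theirs.1)
    }

    fn compatible_fixed(ours: (u32, u32), theirs: (u32, u32)) -> bool {
        // Only the major version needs to agree.
        ours.0 == theirs.0
    }

    fn main() {
        assert!(!compatible_buggy((4, 2), (4, 3))); // minor bump wrongly rejected
        assert!(compatible_fixed((4, 2), (4, 3))); // accepted after the fix
    }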
No, it might be a connection interrupt right You'd blacklist if they misbehave make a Channel::ban() method which blacklists then calls .stop() OK rn we have a hashset called rejected() but i can rework it, then the protocols can call it if sth goes wrong : @draoi pushed 1 commit to master: 89d7a48f8c: net: fix logic bug in version exchange Nice "tcp+tls://dasman.xyz:5262", "tcp+tls://ohsnap.oops.wtf:31337" are these still the seed nodes? i cannot connect to anyone on darkirc ok seeding works from dasman. i guess there's no nodes online? 10:18:12 [ERROR] event_graph::dag_sync(): [EVENTGRAPH] Sync: Could not find any DAG tips 10:18:12 [ERROR] darkirc: Failed syncing DAG (DAG sync failed), retrying in 10s... i keep getting that my local node is on and off (testing) did you just rebuild latest master? yep well there's issues with lilith being a different version so maybe that's why ok i just switched on my node you can probs connect to me should I update my darkirc to the latest master? if you want errorist tho there will be further changes ok dnet dnet doesn't work for me either https://agorism.dev/uploads/dnet-err.txt oh nvm i reinstalled requirements.txt works now https://agorism.dev/uploads/screenshot-1709029405.png yeah so my node shows no connections : @parazyd pushed 1 commit to master: 492974eaab: zk/gadget: Implement is_equal and assert_equal chips. i suspect you have no hosts and your hostlist is empty https://agorism.dev/uploads/darkirc.log try this seed? seeds = ["tor://cbfeuw6f4djcukyf3mizdna3oki6xd4amygprzdv4toa3gyzjiw6zfad.onion:25552"] hostlist is indeed empty i'm connected and have a v healthy log with lots of nodes, but i've been running it a lot and have a hostlist why isn't the seed node sending me hosts 10:24:37 [WARN] net::session::seedsync_session: [P2P] Greylist empty after seeding ok! im connected : test : test back test back nice haumea: was that from connecting with the tor node? yep why dasman and oops node didn't send a hostlist is a mystery that should be solved ideally if the seed operators can run dnet and inspect their hostlists : @zero pushed 1 commit to master: 86dd32b207: fix serialization bugs in net/message.rs oh tnx i was about to do that np i just saw the code is redundant : @zero pushed 1 commit to master: 2c0a3990ed: delete redundant debug messages from message.rs : @draoi pushed 4 commits to master: 75d6dbed7a: lilith: upgrade to v0.4.2 : @draoi pushed 4 commits to master: 3d020df820: Revert "lilith: upgrade to v0.4.2"... : @draoi pushed 4 commits to master: b1ca14688c: lilith: upgrade to v0.4.2 : @draoi pushed 4 commits to master: e6e64911b4: update Cargo.lock haumea: What do you think about using https://github.com/near/borsh-rs instead of our serialisation lib? Title: GitHub - near/borsh-rs: Rust implementation of Binary Object Representation Serializer for Hashing no varints upgrayedd: i've been running darkfid for an hour, but still unspent balance is 0 in drk but it's mining blocks successfully haumea: have you generated a wallet and updated the config as per readme instructions? yes sir and i restarted twice, deleted dirs .etc darkfid.toml says: recipient = "DnGrMAEXQV8E63nMnafHKrUpuzCEboJUCeyBFN6QwYR5" $ drk -c drk.toml wallet --address DnGrMAEXQV8E63nMnafHKrUpuzCEboJUCeyBFN6QwYR5 did you scan? ah no i'm not scanning, ty lol old stuff still applies scanning, subscribing, etc etc ok it works now ty always has been XD fun timez what do i do with all this money. i have $90 already Do you really need varints?
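A self-contained sketch of the Channel::ban() idea from above: blacklist the misbehaving peer first, then stop the channel. All types and method names here are illustrative stand-ins for darkfi's actual net API.

    use std::collections::HashSet;
    use std::sync::{Arc, Mutex};

    struct Hosts {
        blacklist: Mutex<HashSet<String>>,
    }

    struct Channel {
        addr: String,
        hosts: Arc<Hosts>,
        stopped: Mutex<bool>,
    }

    impl Channel {
        fn stop(&self) {
            *self.stopped.lock().unwrap() = true;
        }

        /// Blacklist the peer, then tear the channel down.
        fn ban(&self) {
            self.hosts.blacklist.lock().unwrap().insert(self.addr.clone());
            self.stop();
        }
    }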
they are used to store the length of Vec or String. Most of the time, the length of these items is a single u8, but if you use a full u32, then that's 3 bytes extra per variable item... which seems kinda wasteful, but maybe it's not important... just don't like to waste network traffic for no reason. and i thought we want to reduce dependence on external crates. well now we lost a feature in the process (regression) for example imagine Vec with 20 items (like is used for addr packets). That's 20*3 + 3 wasted bytes their serialization seems to be for hashing rather than network traffic so it doesn't support async either (as it says, they prioritize speed, whereas i think network serialization should prioritize efficiency) : @draoi pushed 2 commits to master: 71b1e8ddab: channel: create ban() method... : @draoi pushed 2 commits to master: 3c06e215a2: net: rename rejected to blacklist... fyi we are only banning in one place rn- when notify() triggers Error::MissingDispatcher nice, it will get used more for sure : @zero pushed 1 commit to master: c275c5c08c: darkfid-single-node README: add info about scan does anyone own the agorism.dev site here? There's a number of resources linked to it in the darkfi book, but they're no longer on the site. I've been going through finding alternatives and plan to make a PR some time this week because there's a fair few broken links bbl deki: hey what's up? you mean the books or what? adiyah: https://github.com/narodnik/script/blob/master/ytwatch.py haumea: yes the books, for example manifesto for democratic civ isn't on the site, but referenced here: https://darkrenaissance.github.io/darkfi/dev/learn.html Title: Learn - The DarkFi Book I found another copy though ah yeah sry i broke it yesterday one sec, fixing it okay, do you want me to point out other books linked to the site that aren't there? There's also the Python one in the same learn section, but there's a newer edition from last year no, it should be fixed in 10 mins also abstract algebra by pinter check now https://agorism.dev/book/ Title: Index of /book/ also we should use #random for this in the future yeah they're showing now, okay I have others not related to agorism.dev so might do a PR some time this week, need to go to bed soon nice gn deki : @parazyd pushed 1 commit to master: caac0f9bec: zk/gadget: WIP sparse Merkle tree gadget implementation... \o/ : @parazyd pushed 1 commit to master: 29a783f49e: zk/gadget: Finish SMT gadget haumea: Want to attempt adding it to the VM to familiarize yourself with it? yessir These two sets of columns you have to additionally create, or find some to reuse: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/zk/gadget/smt.rs#L76-L77 Title: darkfi/src/zk/gadget/smt.rs at master - darkrenaissance/darkfi - Codeberg.org (I still don't know fully how safe reusing columns is) And for Poseidon you also should reuse the existing config: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/zk/gadget/smt.rs#L78 Title: darkfi/src/zk/gadget/smt.rs at master - darkrenaissance/darkfi - Codeberg.org The chips create their own selectors Also note it requires a column for each height level in the tree So there will be a tradeoff between proof size and the tree width But I think in a sparse tree, 3 levels is sensible, no? 
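For reference, a minimal Bitcoin-style varint encoder showing the byte savings argued above (an assumed scheme for illustration, not darkfi's exact serialization code):

    fn varint(n: u64) -> Vec<u8> {
        match n {
            0..=0xfc => vec![n as u8],
            0xfd..=0xffff => {
                let mut v = vec![0xfd];
                v.extend_from_slice(&(n as u16).to_le_bytes());
                v
            }
            0x1_0000..=0xffff_ffff => {
                let mut v = vec![0xfe];
                v.extend_from_slice(&(n as u32).to_le_bytes());
                v
            }
            _ => {
                let mut v = vec![0xff];
                v.extend_from_slice(&n.to_le_bytes());
                v
            }
        }
    }

    fn main() {
        // A Vec of 20 addrs: 1 length byte instead of a fixed 4-byte u32, and
        // each of the 20 variable-length items saves 3 bytes the same way:
        // 20*3 + 3 = 63 bytes saved, matching the rough count given above.
        assert_eq!(varint(20).len(), 1);
        assert_eq!(varint(70_000).len(), 5); // bigger lengths still fit
    }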
i was thinking more like 32 same as the normal merkle tree for coins then adding all money nullifiers, so when we clone the money tree, we can also clone the nullifier tree The coins tree is not normal It's a tree that keeps the minimum needed A sparse tree just grows ah yes i forgot right the height is for efficiency I dunno, 32 columns sounds scary ok then it sounds good, we can test later reusing columns should be safe unless halo2 doesn't protect against that i guess it limits the "slotting" together of chips The layouter/regions do some black magic the gates use columns but they're distinct from them a gate uses some number of distinct columns Those are the columns you create during configuration You'll create these beforehand: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/zk/gadget/is_equal.rs#L62 Title: darkfi/src/zk/gadget/is_equal.rs at master - darkrenaissance/darkfi - Codeberg.org yep just saying the layouter (if done properly which i assume it is), should be smart enough not to reuse the same column on the same row And then you see https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/zk/gadget/is_equal.rs#L66-L69 Title: darkfi/src/zk/gadget/is_equal.rs at master - darkrenaissance/darkfi - Codeberg.org Yes likely correct nice will eat now, and begin on this tmrw also will start updating drk, and want to allow configuring generators in zkas does someone have to add me to the repo on codeberg or is it public? can't find it ok, enjoy I'll be afk tomorrow mostly loopr: Can't open the above links I was pasting? brawndo; I actually can After my account was activated I was presented with a repo search field, and entering darkfi wasn't showing anything ah dunno what's going on there gn gn Hi devs, I'd like to sync my effort better (as my recent docker/arm related PRs were closed) and help the darkfi project busy atm, but may be kicked out soon, you know, contracting ... I'd like to use AI to help with code quality check, improvements, unittests, ... what do you think ? root: they have the current tasks here https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html although you need to know Rust Title: Contribute - The DarkFi Book gm gm root: lol at AI and code quality in same sentence... please don't do that... hi : @draoi pushed 1 commit to master: 532168fea8: doc: small fixes on learn.md gm root: i think that you need to stop using flashy gimmicks, just sit down, open notepad.exe and write code brb b : @draoi pushed 1 commit to master: ecf81d9f93: net: make a generic base class for PeerDiscovery... haumea: super basic rn i will add documentation later, gtg afk for a bit gm gm greets test test back gn(u) gn, gm here : @dasman pushed 4 commits to master: 532d67e972: src/eventgraph: support debugging msgs : @dasman pushed 4 commits to master: bc1f903d57: bin/deg: introduce eventgraph debugging tool DEG : @dasman pushed 4 commits to master: 8f281cf354: bin/deg: .gitignore : @dasman pushed 4 commits to master: 47a319fd00: bin/genev: add deg task and related rpc calls : @dasman pushed 1 commit to master: 103e7b64f8: bin/genev: add dnet task & rpc calls one can test event graph tool with bin/genev, which is a very simple app that was originally made to get into event graph stuff check /script also draoi I'd really appreciate if you take a quick skim over deg :) * oops, bin/genev/script are slot checkpoints like blocks? which one are we at? 
moco: I don't think they're the same, and I don't think we're at any specific point because there hasn't been a mainnet release yet but best to wait for one of the devs to answer Is chat history preserved when messages are sent to the channel and you are not online or can you only read messages received while online? gm gm gmog gm gm moco: only when you're online for ircd, but darkirc preserves msgs for 1 day afk dasman: around? https://0x0.st/HRxr.pdf : @skoupidi pushed 1 commit to master: 9643b5af87: rpc/from_impl: proper event-graph feature usage brawndo: fyi working on a net change that will simplify a lot of the locks etc happy to pass it over to you after Well let's hope that resolves bugs :) upgrayedd: hey, what's up? dasman: check 9643b5af87 you shouldn't push stuff without checking the build passes especially when features are involved ah sorry about that ty : @skoupidi pushed 2 commits to master: d920493173: rpc/client: new RpcChadClient added : @skoupidi pushed 2 commits to master: dc6f4ecd6c: darkfid: replaced RpcClient with RpcChadClient : @skoupidi pushed 1 commit to master: d7495510dd: darkfid: better locks handling and minor code improvements afk draoi: I just experienced a death loop, running genev tmux script, I just closed node_d and the a, b and c nodes are now connecting to it endlessly 16:10:03 [INFO] [P2P] Connecting outbound slot #3 [tcp://127.0.0.1:28885] I understand you're already working on peer discovery, but thought I'd report it anyway anyway, going afk for a bit yeah master is pretty much broken rn, sorry about that it's stable at 661935018707b11031df10330f02a282f432a76f tho not for nodes that want to go offline for long periods and reconnect seamlessly should have a fix by 2m going afk for a bit now also back, hey Seeing tests failing on `master`, is that smth on my end or a known issue rn? loopr: did you pull latest commits? yep which tests are failing? but from gh haven't set up codeberg upstream yet sounds like it's on my end then error[E0433]: failed to resolve: use of undeclared crate or module `darkfi_sdk` --> tests/vdf_eval.rs:26:5 is the 1st one of others yeah that doesn't sound right just ran `cargo test` on master use make, its there for a reason yeah that fails as well cp -f ../../../target/wasm32-unknown-unknown/release/darkfi_deployooor_contract.wasm darkfi_deployooor_contract.wasm aah hang on wasm-strip missing installing `wabt` brought me a step further, but still failing (had run sh contrib/dependency_setup.sh before but seems that didn't install it) ah yeah if it's missing for your distro, pls add it In that script ok - using manjaro Cool now getting error[E0658]: use of unstable library feature 'stdsimd' https://github.com/darkrenaissance/darkfi?tab=readme-ov-file#living-on-the-cutting-edge Title: GitHub - darkrenaissance/darkfi: Anonymous. Uncensored. Sovereign. brawndo: iirc manjaro uses pacman like arch, so deps should be there yeah that's what I was looking at yep pacman Yeah What's the package name?
I can add it now have fun supporting trash distros :D no worries I know my way around claiming manjaro is trash is a bit of a mouthful tho :) at least stuff doesn't break from one day to the other https://github.com/rust-lang/rust/issues/48556 Title: Tracking issue for stable SIMD in Rust · Issue #48556 · rust-lang/rust · GitHub not sure if that's at the root of things but checking https://github.com/Kixiron/size-of/issues/4 Title: Nightly feature stdsimd fails due to removal of the unstable stdsimd feature · Issue #4 · Kixiron/size-of · GitHub loopr: the underlying issue is that deep deps must update to stable simd, and then their parents update their version, etc. etc. until you reach top of dependency tree manjaro is trash its not a bit of a mouthful, actually it comes really naturally after you get used to saying it :D this doesn't break until they forget to update some random certificate :D okey doke what do you run? my daily drivers are gentoo and void the gud stuff ufff well have fun, dunno void tho its good :D although I mainly suggest devuan these days, since debian based and >90% of guides use apt so you will always find quickly what you look for kk but me no like debian unless I have to run a server or smth alpine for servers :D ok tried make on an independent ubuntu (gosh) box and got nearly the same on master error[E0635]: unknown feature `stdsimd` 33 | #![cfg_attr(feature = "stdsimd", feature(stdsimd))] loopr: of course, use the #living-on-the-cutting-edge stuff okey doke, just was wanting confirmation that it's not my trashy manjaro ;) no thats rust new version and the deep deps mentioned earlier welcome to nightly life nightly_life.mp3 thanks upgrayedd loopr: for what? support, works on master now oh noice glhf hacking ++ my (now closed) PR might help too :: https://github.com/darkrenaissance/darkfi/pull/253#issuecomment-1953943553 Title: stdsimd-removed-use-nightly-202402 in rust-toolchain.toml by spital · Pull Request #253 · darkrenaissance/darkfi · GitHub re:: upgrayedd | root: lol at AI and code quality in same sentence...:: some years ago my friends doing translation were laughing at CAT too. They are not laughing anymore: computers + translation. they have no job today. I even dare to say I can smell your fear ;) no offence when stating something either stand by it or don't say it at all, so cut the "no offence" part My problem is not with current state of AI per se, but with what people market it to be and no, I will never be afraid of something that I know the inner workings of why would I? its like saying a mechanic is afraid of a car engine so since current state is literally garbage, I will obviously laugh at such sentences Happy to see the technology grow and become better, but right now, its laughable ACTION drops mic re::haumea | root: i think that you need to stop using flashy gimmicks, just sit down, open notepad.exe and write code:: long time ago I was able to write hex codes for Z80 and some for x51 too. why bother today. I can prepare an example of what I mean. I did rustlings in notepad. But remember asking chatGPT harder questions and was thrilled. I wrote codility challenges in "notepad" too. but be warned, it usually takes hours to solve it right for easier tasks, and AI gave (15 months ago) the result in seconds. and often correctly on first try. What it can do today ? I believe even with my single GPU it can do a lot ;) but if there is no interest, I accept re :: upgrayedd :: I did not want to argue or attack, hence "no offence".
but again, if there is no interest in my help, I accept root: since when is discussing your opinion arguing or attacking? we ain't snowflakes here, feel safe to express yourself well .. do you know what opinion is like ? We obviously are interested in your help, hell you don't even know how many times I praised you and got thrilled when I saw a new PR by you the problem is, that you have to understand, we don't want to pollute or bloat our codebase with garbage especially if you don't understand why that is. root: obviously, the word was specifically chosen for that reason, its called wordplay :D (re "opinion") thank you. I am glad I was able to help a bit And I love challenges (as with Codility mentioned above) so let's try one here: You give me an easy task (programming or whatever), I give you a (mostly) AI generated answer. I hope you stand by your words too, so as no snowflake you are not afraid if the result makes you stop laughing at AI. to me just the word AI itself is laughable - there is no such thing as AI, it's just code doing its stuff. But that's a different story code creating code :D then why do you use it? it should be better to use language prediction models, since thats what you are mainly using ask it to describe it like I'm 5 what a gravitational lens is and how it affects the redshift of perceived light I still don't see your argument, even if the response is correct, why would that make me stop laughing, since I know how it "thought" of it its nothing "new" or "groundbreaking", so coming back to the engine example, producing a "faster" engine, doesn't change the fact of how it works oh apologies root, i didn't realize you were so old school "even if the response is correct" - I do not get it - do you want your tasks solved or not ? Do you stand up to the challenge or not ? we get a lot of beginners here who think using vscode and AI will do programming for them anyway off to sleep, cya tmrw, gn no need to apologize. gn haumea root: The point is not "solving" the task, it is understanding and/or learning how it solved it growth is the name of the game, not finishing something just for the sake of it at least thats my mentality I told you to ask, so yeah I'm up for the challenge, so shoot! ok. I needed another wine :D I agree about learning - but why learn something "lower level" - that "in theory" - can be described by: define task, ask genAI/agents to solve it, test the results, (repeat), review, test, send PR for review ? so shoot what ? I can shoot only like average in real life, was better in games shoot as in I've already mentioned the "challenge": ask it to describe it like I'm 5 what a gravitational lens is and how it affects the redshift of perceived light root: why learn something "lower level": curiosity is the simplest answer, the most proper one is to become more efficient in understanding how to do the rest of the parts of the flow you described what do I test against if I don't know what the hell I'm testing? or how do I review something if I don't know how it's supposed to work? taking "correctness" of said agents for granted is a foolish mistake imo additionally, offloading that to someone else is even worse (send PR for review) since you will not be able to understand the review, and simply pass it back to the agent to "solve" your supposed mistakes then the question is, why act as the middleman in the first place? if the repo owner wanted, they could easily employ code bots to do pretty much everything but is the technology there?
you as an enthusiast and defender, feel that you would trust that generated code competelly? again talking at its current state, when it becomes better we can come back to this discussion and laugh s,competelly,completely I saw youtube videos about quantum computers - maybe it was not AI - explain for 6yo, high school, university, field specialist, but that was not my point and it is not a task I would like to prove AI use. I remember asking about parsec and au and other questions and even a 7B network gave me fine answers (I am not 5 yo, so I did not ask for that) the development is an iterative process, sometimes you just "do not know" ... The reasoning behind the 5yo is very simple: the best way to see if you understand something, is to try explaining it to your mother (or a 5y old) and they get it so by making that prompt, you will see that it doesn't simply spit the first wiki result (which I would have done faster) so you test the I in AI that way or - if you do something and you are not able to explain it to a 5yo - stop doing it :D exactly! sometimes you just "do not know" care to elaborate more? because I don't like what I perceived from that statement in the sense that, do you stop and take for granted, or you mean you learn until you know? the keyword is "iterative process" - me or anybody in the team as long as it goes forward aha ok now I got food for thought: can we trust "ai" generated code? Is "ai" code always good because it was made by "ai"? what when we start trusting it and then all sorts of hacked ai starts to appear? good luck debugging code nobody wrote. As with everything, it's not just good or bad, just a whole new rabbit hole of new possibilities and new problems still someone has to know loopr: exactly, hence why in my original argument, I specifically stated that I'm against how the current state is marketed well "someone" like a "bus factor" of 1 ? lol did you check the ceo of nvidia speech a couple of days ago? everyone will be a developer because of AI gl debugging then :D he was always great in advertisement, but I saw potential many years ago. unfortunately sold all my stocks before the AI craze why tho? the gaming dominance should have at least told you it will stay steady but thats a discussion for #markets XD anyway back to original convo as you said, advertisement I've had worse money decisions :D yep hence why back in the first argument, I explicitly said I don't like how they market the current state ^ "he was always great in advertisement" yy just saying, marketing is a hype trap gravitational lens and redshift : wow if I had that when I was 5 :D https://0x0.st/HR0Z.txt root: for a language prediction model, thats decent, but still not impressive in my book :D last time I used AI/LLM to help with writing code, it took me down a rabbit hole where it totally misled me, got functions in the plugin wrong (which was an open source plugin), and I wasted more time than if I had not used it this was for a C++ task at work last year, and that's probably the last time I used it for 'writing code' lol you may call it lang prediction or "stochastic parrots" or anything you like, but that does not change that the present state is already quite good and it should make you stop laughing at AI (and rather use it). try to imagine, what will come (as my friends in the translating business were not able to) re- deki - I am not saying it is (neither was) perfect - but again - who is ? imagine 64% good - like for MMLU - that even a 7B model can be... but let's get back to darkfi. I had a strange error recently.
I had to increase to SHM=256m for docker build on x64 (only for debian). but I am not able to build (e.g. fedora) docker on ARM even if SHM=4096m ... seems strange... any ideas ? I'm not sure about that I'll check later, this output tail is with less (than 4096m) SHM https://0x0.st/HR0b.txt ;wine bottle finished, gn that doesn't look like a build problem though, rather a test failing, and possibly a bug? should be fixed after latest commits ++ codeberg down https://imgur.com/Ab5HW5P Title: Imgur: The magic of the Internet gm hihi gm no gm gm wow. not 4G, not 8G, but rather 12G shared memory required for aarch64 docker build... Successfully tagged darkfi:aarch64_fedora_8f281cf35_2024-02-29shm12288m; 163m13.471s gm gm waving I'll think about a bot sending gms randomly so as to not doxx location lol hello had brief outage but am back now errorist: here? fyi https://github.com/darkrenaissance/darkfi/pull/256 Title: Fix dev link to dev section by holisticode · Pull Request #256 · darkrenaissance/darkfi · GitHub : @skoupidi pushed 2 commits to master: ed8e485575: validator: changed consensus finalization logic : @skoupidi pushed 2 commits to master: 632f07a322: contrib/localnet/darkfid*: configuration changes `trait Formatter { fn do(&self)}`, then implement: `impl Formatter for dyn fmt::Display` - can I not use the fn `do` on an object which implements Display? well that's what rustc tells me, so how can I do that? Can provide some pastebin if this is not enough loopr: you impl Formatter for a generic, which implements the Display trait check src/sdk/src/dark_tree.rs for example its a struct though, but I reckon you can do something similar for a trait like impl for Formatter {} Okey doke will try in a bit Thanks igh_imma_head_out.jpg glhf hf2 : @dasman pushed 1 commit to master: 6d39951185: bin/deg: updating unreferenced_tips on receiving new events : @dasman pushed 1 commit to master: b9e74e52dc: bin/darkirc: add deg task & respective rpc calls : @dasman pushed 1 commit to master: 784438823d: bin/deg: add guard to only show infos on live nodes : @dasman pushed 1 commit to master: 50e57cba54: bin/darkirc: add forgotten deg_task.stop() : @dasman pushed 1 commit to master: 0043665c46: bin/tau: add deg task & respective rpc calls upgrayedd: lets cont here the problem with MAX_VALUE - hash is that it can easily overflow quickly not if you use a 64 bit value ;) shit 64 BYTE lol ahahah wait we already know the number is derived from 32 Bytes so we can use that max bitcoin has this weird af calc called "bits" for this float aka max from 32 BYTES BigUint can handle that without overflowing oh yeah lol true yeah it's 32 bytes duh :D yeah so MAX_VALUE_OF_32_BYTES - hash should do the trick and we don't have to use division anywhere That hash is the RandomX output hash? brawndo: yeah Yeah then it's good The more work, the higher the result since we want the definition of: lower hash number meaning block was harder to produce exactly! (also perfect ordering and work of a subsequence is less than work of the sequence) ++ so haumea: will you write a "spec" with formalities? and then I will impl everything yep lets fucking go!!! ;)) LFG if we have the proof for the sequence sum we are golden af https://bitcoin.stackexchange.com/questions/2924/how-to-calculate-new-bits-value Title: protocol - How to calculate new "bits" value?
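To answer the Formatter question above: a blanket impl over a generic bound by Display does the trick; note also that `do` is a reserved keyword in Rust, so the method needs another name. A minimal sketch:

    use std::fmt;

    trait Formatter {
        fn format(&self);
    }

    // Blanket impl: every type implementing Display also gets Formatter.
    impl<T: fmt::Display> Formatter for T {
        fn format(&self) {
            println!("{}", self);
        }
    }

    fn main() {
        42u32.format();
        "hello".format();
    }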
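And the MAX_VALUE - hash idea above as a sketch, using BigUint so the 32-byte arithmetic cannot overflow (function names are illustrative, not darkfi's actual code):

    use num_bigint::BigUint;

    /// The largest 32-byte value, i.e. 2^256 - 1.
    fn max_32_bytes() -> BigUint {
        BigUint::from_bytes_be(&[0xFF; 32])
    }

    /// Lower hash => harder block => larger distance => higher rank.
    fn block_rank(hash: &[u8; 32]) -> BigUint {
        max_32_bytes() - BigUint::from_bytes_be(hash)
    }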
- Bitcoin Stack Exchange this is how bitcoin does it since that sequence length will also be our security threshold it's really weird, they calculate something called 'bits' will check later https://bitcoin.stackexchange.com/questions/30467/what-are-the-equations-to-convert-between-bits-and-difficulty Title: blockchain - What are the equations to convert between bits and difficulty? - Bitcoin Stack Exchange btw randomx hash max: BigUint::from_bytes_be(&[0xFF; 32]) just look at the function SetCompact same is used for mine target and wtf yourself where the next mine target is max/next_difficulty haumea: better check our randomx stuff bitcoin can pollute your mind XD src/validator/pow.rs has pretty much everything you need ok ty, although i'm also working on integrating the new smt chip in zkvm but will do these tasks tbh I think the consensus is ultra high priority right now ok will prioritize that this weekend <3 you can start with the functions specing Yeah if you can get that done then you can get ppl to review it (again) aka how each rank is calculated so I can start implementing them in parallel and then you do the proofing that should be quick so we then work in parallel with each other y-y-yessir hai! yosh! usss! hai hai https://github.com/monero-project/monero/pull/9184 Title: Remove instructions for Void Linux, add NixOS by sausagenoods · Pull Request #9184 · monero-project/monero · GitHub https://github.com/void-linux/void-packages/pull/44422 Title: srcpkgs/*: remove all cryptocurrency/blockchain packages by 0x5c · Pull Request #44422 · void-linux/void-packages · GitHub > @void-linux void-linux locked as resolved and limited conversation to collaborators Sep 4, 2023 i guess we need to stop recommending void to people wtf why do they hate on crypto :/ xenoestrogens in the water lol I wanted to give qubesos a try, have you tried it? sounds awkward, i want a lean minimal linux ACTION sigh : maybe time to become a gentooman :D https://github.com/void-linux/void-packages/pull/44778 Title: Partially revert "srcpkgs/*: remove all cryptocurrency/blockchain packages" by lemmi · Pull Request #44778 · void-linux/void-packages · GitHub https://github.com/void-linux/void-packages/discussions/46087 Title: Better communication · void-linux/void-packages · Discussion #46087 · GitHub the void devs moved it to 'discussion' to avoid having to deal with it haumea: thats old news XD haumea: btw did you check how the branch name was called? removed-cryptoshit :D devuan should be the go-to recommendation gm ser motherfuckers since i posted in that discussion yesterday, it was locked by the devs gm brb b gm gm errorist: are these the updated steps to connect to a tor node? https://darkrenaissance.github.io/darkfi/misc/tor-darkirc.html or do I need a seed from you? Title: tor-darkirc - The DarkFi Book hey deki: this is if you want to run your own tor node, but if you want to just connect, you can add seed tor://cbfeuw6f4djcukyf3mizdna3oki6xd4amygprzdv4toa3gyzjiw6zfad.onion:25552 make sure you have allowed_transports = ["tor", "tor+tls"] yeah I think just to connect at first, like what you helped Alice with earlier this week haumea, draoi: what if he just leaves the seeds empty, my lilith tor onion should get announced to him automatically, right? maybe you can test this deki sure, so leave the seed field empty? 
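For reference, a simplified decoder for Bitcoin's compact "bits" encoding (SetCompact) linked above: the top byte is a base-256 exponent and the low 3 bytes the mantissa. Sign-bit and overflow handling are omitted in this sketch.

    use num_bigint::BigUint;

    fn bits_to_target(bits: u32) -> BigUint {
        let exponent = bits >> 24;
        let mantissa = bits & 0x00ff_ffff;
        BigUint::from(mantissa) << ((8 * exponent.saturating_sub(3)) as usize)
    }

    fn main() {
        // Genesis bits 0x1d00ffff => target = 0x00ffff * 256^(0x1d - 3).
        let t = bits_to_target(0x1d00ffff);
        assert_eq!(t, BigUint::from(0xffffu32) << (8usize * 26));
    }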
yes, try please, I wonder what happens :D errorist: seed empty means no lilith so unless he/she/they/them have a manual peer we will connect to no one oh I see, so in the default config there always needs to be at least 1 seed? to discover the others well if you know a peer directly you can connect there if you know no one, then yes you need a seed you don't need a seed if you have a hostlist upgrayedd: but I don't see peers = [] in the darkirc config, you mean I could set peer=[] in there if I know one? I have peers = [] in lilith errorist: yes nice maybe we should update the docs, make it more clear for new people what a seed is and what a peer is, was confusing to me at first I can help with the doc, but I never used github or pushed changes so maybe you can help with that :D preferably, use codeberg watch a git tutorial on youtube or you can just send the .patch like the oldschool days nah it's good to get on git, esp if contributing docs longterm git != github wdym? github is git therefore git is github therefore git == github github is a front end for collaborating over git its like saying firefox == http or weechat == irc lol was jk ;) lol ACTION god oops wrong command creates dinosaur ACTION destroys dinosaur ACTION creates git ACTION creates man ACTION creates github ACTION centralizes git ACTION destroys man :D haumea: ok, i'll watch some git tutorials hehe this is a good git tutorial site btw https://learngitbranching.js.org/ Title: Learn Git Branching strong men create git times, git times create github, github create soy men lol indeed : ey errorist, so I left my seeds field empty, and ran ./darkirc and it works :> : is that meant to happen : deki: what does your darkirc log say? which nodes did it connect to? : weird your message isn't coming through on weechat, but I get it on the telegram bridge channel. Do you mean this: Connecting outbound slot #2 [tcp+tls://dasman.xyz:26661] deki: check your weechat is connected to darkirc daemon go to first buffer, you should see the disconnect message and the reconnect retries : upgrayedd: not sure I understand, first buffer in the streaming messages after running ./darkirc? deki: first buffer in weechat like how you move channels : ah right, yeah I had a look there but no disconnect message just this: irc: connecting to server localhost/6667... : irc: connected to localhost/6667 (127.0.0.1) : then welcome to darkirc network : I can just add the seed and reconnect, didn't have this issue last time I tried this : I mean, last time I didn't leave the seed empty are you using both ircd and darkirc on port 667? 6667 : I've disconnected ircd, because it was having issues with addr or port in use : but looking at the first buffer again, the previous connections to irc were on 6667 : I will brb, supermarket is about to close darkirc on master is broken btw about to push a fix : @draoi pushed 2 commits to master: 7ec08e76b4: p2p: add is_pending() method... : @draoi pushed 2 commits to master: db92a9e3dc: net: introduce HostState state machine to safely manage host activity... haumea ^ : @zero pushed 1 commit to master: fcb1ca1242: book/consensus: add formalisms for block rank calcs upgrayedd ^ haumea: codomain, you missed the -?
yes exactly, it cannot be < 0 ∑h(b_i) ≤ T(b_i) that seems wrong ah yeah it's missing a sigma :D nice ok so the ranking is defined as: block rank: distance from max 32byte int fork rank: sum of its block ranks : @zero pushed 1 commit to master: c443ec962a: consensus: small edits yeah that could be one way since then smaller target -> higher rank for the last part, we need to add the sequence length we were discussing or is it safe to assume that it holds true for any size of sequence? see the section called additivity it should hold true for any subsequence that work(subsequence) < work(sequence) I'm not talking subsequences I'm talking two distinct sequences how do you prove that W(Sa) != W(Sb) at a certain length see the section called hash function oh I see you pretty much disprove it, but proving the opposite ok I see it so effectively, our "forks" would only be max length 2 Am I wrong? wdym max length 2? they can be any length (assume synchronicity and mined block reaches everyone instantly) ok let me make it clear: lets say you want H(a1) + H(a2) + H(a3) = H(b1) + H(b2) we have canonical chain and 2 miners extending it you have to construct it using a sequence of operations minerA mines blockA, minerB mines blockB at the same time so we have 2 forks of length one, extending canonical since they will both have to mine the next best fork, lets say its blockA they will each mine blockAA and blockBA, making the fork with blockB obsolete so the highest you can get is length 2, as any consecutive block extends you so you can move the subsequence to the right not necessarily, maybe you mine blockBC and it has a block hash of 0 then suddenly B -> BC is the winner wait thats the synchronicity problem tho since all honest miners always try to extend the current highest ranking block aha yeah, so if that's not allowed then you're correct we mitigate the synchronicity by having the fork buffer thats why I was asking whats the safe sequence length number should be >2 to mitigate race conditions in the last block but something like 4-5 should be more than enough to be ultra safe but we have to also formally prove that oh that's a harder question, depends on mining power and network size .etc exactly! thats the second "part" of having a formally proven consensus synchronicity and byzantine tolerance byzantine is pretty much covered, since they have to produce a higher ranking hash but synchronicity, is more like: ensuring that everyone has received the correct state by a top limit which is usually called δ from delay to mitigate the network latency, we have the buffer, since then the δ is bound by our mining time so if the mining time to produce a block is long enough so the previous one has reached everyone, we are good but we keep a > 2 buffer, to properly ensure bigger network delays what happens when a race occurs? does the network split? define race 2 nodes at each end of the net map producing the block for same height? so as the blocks go into the network, each side sees a different highest ranking fork?
if for example the buffer is too small the network is partitioned into 2 sections, and they confirm different forks aha yeah if the buffer is too small, lets say 1, so you just have end tip forks you will hard fork, so network split yeah exactly ouch bad so your buffer size ensures that mined blocks will reach everyone by some bound and they can correctly define the highest ranking fork thats why I kept asking about sequence length I thought it would be bound by the hash property but its not so we should define it by actual network conditions you get what I'm saying right? ACTION thinking it is also related to the hashpower not exactly in randomx but since we're targeting monero hash power since you define the target time it should be fine (not easy to attack) the problem is spikes, where the algo adjusts to accommodate so our length should be able to cover those spikes in terms of how quickly the randomx algo adjusts to the new hashpower of the network in general we can start extra conservative, with like 10 length buffer, and reduce it as the network hashpower grows yeah that might be good you see now how the two parts come together? yes with normal PoW you can never have that buffer since its always based on higher length but with our ranking logic, since you can't have said hash sequences, we can use that buffer since the hash sequences are also "protected" by the PoW itself actually no that is not true someone can privately mine a different sequence to propose to the network scratch that with the buffer we just assume that "hidden" forks can never be proposed if they're over some length not assume, rather enforce yeah there's no other way kinda interesting the implication for what finality means yeap, its a very nice thought provoking space so if we want to be technically and factually correct we are doing enforced finality after a certain depth when the probability of a reorg is low enough, nodes consider things final yy exactly, but instead of assuming that, like in normal PoW, we enforce it is it safe to use the same params as monero for estimating reorg probability? what happens if monero has a deep reorg? hm brawndo should be better to answer this iirc you don't reorg if the parent chain reorgs ok I don't think we can use same params, as monero is pure PoW (satoshi's) so their reorg probability is based on length, while ours is based on ranking which is affected by length obviously ic the way I see it right now, the ranking is more of a same length fork choosing rule in pure PoW, miner can choose to extend whichever of them in our case, an honest miner will choose to extend the higher ranking one my idea was to calculate it for a single block based on monero hashpower then use that to see what the reorg depth should be calculate what for a single block? of a block reorg happening you don't reorg in that case both sequences are valid for monero oh wait you mean in our case?
so finding the probability of the second block ranking higher than the current one yep i want to see what the depth should be yeah those calculations are exactly the ones I was mentioning over and over re: buffer length XD find the sequence length where that probability reaches 0 so we can safely use that length (or a bit higher) as our buffer size well if p = 0.1 which would be huge, then for n = 10, the probability of a hard fork would be smaller than 1.0000000000000006e-10 which is minuscule well, we don't care about the ifs :D upgrayedd: given the hidden forks can never propose after 1 block, how is the case of 2 forks with timestamp collision handled? aiya: can you describe the scenario more? wdym by timestamp collision? 2 miners independently find the next block at the same time and start block propagation to connected nodes well the buffer is >1 block (should be even more actually) to accommodate for that thats the synchronicity problem we are describing so as long as δ (the delay to reach everyone) is less than the time to mine a block we are covered okay, so in this case, will the network decide which block is accounted for and drop the fork with less hashpower since both blocks will have reached everyone, and they know which is the correct sequence should be extended no they keep both buffer is not a single fork it contains both of them but as they grow, everyone diverges to a single sequence is the sequence defined as when the block was received in the buffer? so left end of the buffer will be common for everyone wait wait think of the buffer as the current pool of forks, all extending canonical in that pool, everyone will converge to the same higher ranking fork and when that reaches the length we want, we can "pop" its first block and consider it final so then the buffer will contain just that single highest ranking fork, and we enforce everyone proposing alternative forks to extend that fork at any of its blocks so what we are discussing here is, whats the probability to reorg that fork on each height lets say the fork is 10 blocks so we are looking for the probability to produce a block reorging its last block then the prob to produce a block reorging its last 2 blocks ... then the prob to produce a block reorging its last N blocks I see, working backwards, so how long does the buffer hold the blocks that N is the desired height, where we can enforce finality exactly Hi, in order to broadcast transactions do I need to build from master? I'm using the latest release but I can see it's quite old and I'm getting a connection error when creating a tx even while my node is fully synced. moco: testnet is getting nuked, so don't bother right now gn gn nuked as in we are preparing for mainnet? testnet 2 then mainnet !list gm gm gm : @draoi pushed 2 commits to master: 1f9a01b94f: net: combine remove_(..) etc methods into a single remove() method. : @draoi pushed 2 commits to master: 5559e46ba9: chore: standardize debug statements on net/hosts/store.rs gm darkfi/src/zkas/parser.rs:688 maybe this could use VarType::from_str()? is there a reason it's like this? sec haumea: Yes, could be an impl TryFrom cool ty But see there is a catch-all that gives a parser error So make sure that's still there ++ hexchat has disbanded :/ why do we use MERKLE_DEPTH_ORCHARD as u64 in darkfi/src/validator/fees.rs:50 is that a mistake?
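The back-of-envelope above, written out (assuming each extra block of reorg depth independently succeeds with probability p):

    P(\text{reorg of depth} \ge n) \approx p^{\,n},
    \qquad p = 0.1,\; n = 10 \;\Longrightarrow\; p^{n} = 10^{-10}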
in other places we use MERKLE_DEPTH You can see they're different types const MERKLE_DEPTH: u8 = MERKLE_DEPTH_ORCHARD as u8; yeah but it's being converted to u64 from usize What's wrong with it? The accumulator is u64 Anyway that's just an arbitrary number, those fees don't really represent reality nbd, i can leave it there is MERKLE_DEPTH_ORCHARD: usize and MERKLE_DEPTH: u8 = MERKLE_DEPTH_ORCHARD, but we use MERKLE_DEPTH_ORCHARD as u64, so why not just use MERKLE_DEPTH as X wherever it's needed? Unnecessary casting maybe MERKLE_DEPTH should be non-public internal to merkle_node.rs since it's only used there Can be, yeah ++ It's probably not needed now, but I kinda prefer having things as const That allows you to sometimes make a const fn Which is good since you can evaluate things compile-time ok yeah it's nbd, just wondering Sure, good that you ask :) when i see something weird looking it's probably deliberate :D ;) haumea: https://eprint.iacr.org/2024/200.pdf nice, should i read it? their logic is: number of zeroes at the front of its hash this is what bitcoin does so they go for absolute best blocks aha yes i was actually wondering about this btc does it for choosing single blocks on same height not chain selection chain selection is still longest chain Am I wrong? I think the number of zeroes is just an analogy(?) You don't do that in practice e.g. converting to a hex string and counting zeroes upgrayedd: no it's not longest chain, it's the most work aha the bits you were sending yday? yep ok so that is absolute best blocks so last night i was thinking whether we want a lottery or not while we do "relative" best blocks the downside with a lottery is that sometimes the actual winner loses or the other way around too early to think which is which this means there's more volatility which means a lower expectation however sometimes you want smaller hashpower to win proportionally wdym by lottery? because we use the hash not the target wait wait if we wanted it to be absolute, we could use the target then the hash we use the randomx hash distance from max32 which is the lowest target yep that's a lottery so our ranking is absolute best block for same height the target is what counts, not the hash since lower randomx hash means harder block the hash doesn't matter actually, it's just that we mined a valid block but by using the hash (not the target), we make it a lottery where blocks with smaller difficulty can sometimes win proportionally idk if that's desirable or not aha since you can produce a higher ranking hash, from a smaller target yes exactly then we can use what I was saying yesterday distance from its current target yes and if the targets match, then use the hash to break the tie but that will still apply we have to rethink this then with the hash breaking it doesn't matter how it's implemented since it's the exceptional branch hash breaking = using the hash to break ties ok but how can we apply it to a sequence? for blocks at same height thats easy: first we compare the targets if the targets match, then we compare the sum of the hashes if they don't match, we keep the lower? yes ok so we have forkA so always the fork with the smallest total target wins in order for forkB to be accepted, should it have every target at position N lower than forkA? if they match, then a random one is picked and when they match, its hash should be at a further distance from the target?
not every target, just the sum we don't pick a random one ok so the general rule is: unless you don't want to allow a fork to suddenly overtake another one but then you will get a lot of ties yeah but we break the ties with the hash lets recap: it's better imo to use the sum(distance(target, max)) you can even square the distance if you want wait so that smaller targets have higher weight the problem in this is that it will always be true for the tip you need a tie breaking rule yes tie breaking rule is which has bigger sum of hashes the hashes are random anyway well not completely, that's a wrong statement lol nw ok yeah it sounds more concrete since you pretty much use the same assumptions as normal PoW what do you think about squaring the distance(target, max)? it favours smaller targets much much more but for ties, instead of choosing whichever and hope its the one getting extended, you can provably choose one yeah squaring is a good option squaring prevents reorgs since these targets will probably be very close to one another so sum ties would be very common we could even cube it lol nah too much. total recap: block rank = tuple of its target distance from max and its randomx hash distance from max fork rank = tuple of sum of squared block target distances and sum of randomx hash distances i would just compare randomx hashes directly where? I was going to write: when comparing forks, first we compare the first sum, if they match, we compare the second, lowest wins when comparing the target distance from max, we are favoring the forks that are harder to mine but if 2 forks were equally hard to mine, then 2 miners both created valid forks where those hashes are exactly in the target ranges doesn't matter, it was just luck so you just pick 1 of them, it doesn't matter which one when mining, if you get a lower hash it doesn't mean you did more work because you stopped once you had a valid solution and you start mining the next block it matters which one you chose if we don't have a rule, nodes can drift and hard fork while we want "honest" nodes to follow the fork choice rule i mean you can pick lowest or highest, doesn't matter as long as it's the same if they don't they ain't honest you don't need the distance from target or max aha yeah true sum(a) > sum(b) => a wins just simplifies the logic slightly so same for targets? just sum the block targets no target matters since we want to emphasize lower targets yeah lowest sum wins you must calculate sum( distance(max, target)^2 ) ok so to recap (again) because if you do sum(distance(max, target))^2 or distance(max, sum(target))^2 .etc then they are not additive whereas sum(distance(max, target)^2) is additive ain't sum(target) additive? yes but you want the smallest to win, so that calc should be additive aha now I got you rank((a1 a2 ...)) = rank((a1)) + rank((a2 ...)) btw you might need big uint since adding a bunch of 32 byte values might exceed 32 bytes, and squaring doubles the size yeah we already use biguint :D no overflow issues here no worries just clarify this for me: rank((a1 a2 ...)) = rank((a1)) + rank((a2 ...)) we want this to be additive not essential but nice to have, no? doesn't that hold true for: a=target and a=distance(max,target) and a=distance(max,target)^2 yeah additive is good to have I don't dispute that XD where is the sum in those 3?
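In symbols, with M the 32-byte max and t_i the block targets: only the sum of squared distances decomposes over subsequences, which is the additivity wanted above.

    W(t_1,\dots,t_n) = \sum_{i=1}^{n} (M - t_i)^2
                     = (M - t_1)^2 + W(t_2,\dots,t_n) \quad\text{(additive)}

    \Big(\sum_{i} (M - t_i)\Big)^{2} \ne \sum_{i} (M - t_i)^{2}
    \quad\text{in general (not additive)}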
its the inner stuff a1 a2 etc rank((a1 a2 ...)) = rank((target1 target2 ...)) or rank((distance(max,target1), distance(max,target2))) or rank((distance(max,target1)^2 distance(max,target2)^2 ...)) arent all 3 additive? (max - target1)^2 + (max - target2)^2 + ... (higher rank wins) target1^2 + target2^2 + ... (lower rank wins) yeah they are reversed on who wins, but the additive property still holds true ah yeah that's true so we can just do sum(target) and lowest wins they are the same, it might be better and for ties: sum(hash) and again lowest wins ++ simplifying the calculations to the bare fucking minimum kiss at its best yeah great wait tho I'm wrong thats for same length forks i need to go shower cos got a train in an hour in a real world scenario the nodes will always try to mine the lowest ranking fork therefore always extend genesis so we never progress XD so yeah we need the distance so we grab the highest ranking fork not the lowest so doing (max - target1)^2 + (max - target2)^2 + ... is probably the best hmmm well i got ircd running, will check back in a bit the reasoning being: fork with less blocks will always have a lower ranking, therefore winning so we never expand not gud correct max - target doesn't have this problem haumea: and with squaring we enforce smaller targets right? yep well it disproportionately favors smaller targets with the idea being that it will be harder to trigger the reorg between forks kk will start with the code and then will update the docs and fix tests draoi: can you fix lilith and apply clippy stuff for the p2p latest commits? Greets gm : @draoi pushed 1 commit to master: 9d685f408b: lilith: migrate whitelist refinery to new HostState logic : @draoi pushed 1 commit to master: fd728f4324: darkfid: update to new channels() API make clippy i will deal with in a bit, just working on a commit as long as it passes its good, lints can be chored later ++ draoi: there is a death loop with manual peers oh damn src/net/test.rs tests manual connections but no death loop has emerged, any tips to reproduce? the node is constantly trying to connect to the peer for some reason can you share logs? I just started contrib/localnet/darkfid-five-nodes ok ty and one of them got stuck in that death loop mainly node3 (indexing from 0) I just got this: [INFO] [P2P] Connecting to manual outbound [tcp+tls://0.0.0.0:48442] (attempt #4934195) woah infinite times had to manually kill it with fire on it btw haven't pushed the latest consensus changes, so if the nodes fork don't bother :D : @draoi pushed 1 commit to master: 36ecfb18e2: manual_session: fix death loop... should work now inshallah hooray :D hello, i am trying to airdrop, but getting io error unexpected end of file. should i just wait for the next testnet? (apologies if this is the wrong channel to ask this) wow: yes upgrayedd: ok thank you test test back Hi! GM gm ash ash: better to use testbot on #random, as to not pollute this convo got it (y) !topics !topic test Added topic: test (by upgrayedd) !list Topics: 1. test (by upgrayedd) !del 1 !topic test suite status Added topic: test suite status (by aiya) upgrayedd: it's !deltopic !deltopic 1 Removed topic 1 !list Topics: 1. test suite status (by aiya) dasman: noice !topic consensus updates Added topic: consensus updates (by upgrayedd) gm greets hello sup Hi holla hey hi salud !list Topics: 1. test suite status (by aiya) 2. consensus updates (by upgrayedd) !start Meeting started Topics: 1. test suite status (by aiya) 2.
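The recapped fork-choice rule as a sketch: compare the summed squared target distances first, tie-break on the summed hash distances, higher winning in both cases. Types and names are illustrative only, not darkfi's actual structures.

    use num_bigint::BigUint;
    use std::cmp::Ordering;

    struct ForkRank {
        /// sum of (max - target)^2 over the fork's blocks
        target_dist_sq: BigUint,
        /// tie-breaker: sum of (max - hash) over the fork's blocks
        hash_dist: BigUint,
    }

    /// Returns Ordering::Greater when `a` is the preferred fork.
    fn compare_forks(a: &ForkRank, b: &ForkRank) -> Ordering {
        a.target_dist_sq
            .cmp(&b.target_dist_sq)
            .then_with(|| a.hash_dist.cmp(&b.hash_dist))
    }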
consensus updates (by upgrayedd) Current topic: test suite status (by aiya) !topic net update Added topic: net update (by draoi) make test is stuck in a loop at [INFO] [P2P] Connecting to manual outbound [tcp://127.0.0.1:53208] (attempt #XXXX) i think it was just fixed oh, not merged yet? should be fixed on master 36ecfb18e261d887484dc1f62b461fd1be470f3c i had it this morning as well, but was following log here and apparently it was just fixed ok thanks, will check I ran make test sat or yesterday and it succeeded this bug was introduced yday and fixed today (took quite long tho) loopr: doubt(x) a consensus test is broken were you on latest master? failed for me at master yy nw will fix it plz click next if finished !next Elapsed time: 3.4 min Current topic: consensus updates (by upgrayedd) ACTION clicks but nothing happens well it just happened so must've worked lol ACTION you have to type it ok so re: consensus ACTION waves hands (consensus) with great efforts we managed to create something real cool regarding forks handling noice :) it's pretty much similar to PoW, but with some minor tweaks here and there the main difference is the usage of a buffer, in which forks live until we can consider them final i didn't know that was possible but it makes sense thinking about it now, I have locally latest changes, testing with 5 miners on a 60s block time with a 6 block buffer biggest reorg I've observed: ACTION drum rolls :D 1 block XD finality is a really useful property based obviously take my testing with a giant grain of himalayan salt So how do we handle reorgs practically? Blocks until finality: ... Est time until finality: ... wen testnet brawndo: so imagine we have a pool of forks all these forks extend canonical, and when we mine, we only extend those once a fork reaches a size, the security threshold (aka # block confirmations), we push its first block to canonical and purge all forks not starting with that finalized block so effectively, we have a queue of blocks to be finalized and reorgs only happen inside the buffer cool nice so based on PoW properties and hashes, we can safely assume that honest nodes will always try to extend the fork in the buffer with the highest *rank* Cool will update the consensus doc to have it in writing Yeah get that stuff reviewed :) speaking of: hanje-zoe, will you update/add-missing formal stuff? sure thing this is the public docs on the website yes? one change I did from the stuff we discussed earlier how do we know what the buffer length should be? is that both sequences use the squared distance well, we need to calculate the probability we discussed yday loopr: doc/src/arch/consensus.md ++ probability to reorg N length (or same path on darkfi book) I'm 35 blocks in, everyone converges correctly, with max 1 block reorg in the forks pool what's an acceptable limit for reorg? but the number 6 I used is somewhat arbitrary i mean probability based on btc # confirmations xmr uses 10 oh btw, what we do is/should be called enforced finality hanje-zoe: the length for which the probability reaches ~0 for example over 5 years, should the probability of a reorg bigger than the buffer size be 0.025 (2.5%)? thats high tho ain't that dependent on hashrate? ofc but you wanted to pick a good depth so we will have to estimate a bunch of things but to start doing that we need to set the initial parameters such as the p value we want to use aha so you calculate in reverse right?
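re the fork ranking we converged on above, a rough rust sketch of how i read it (using the BigUint we already have; fn names are made up, not the actual validator code):

use num_bigint::BigUint;

// block rank: squared distance of the block target from the max target.
// smaller target = harder block = bigger distance, so higher rank wins.
// assumes target <= max, which holds for any valid block.
fn block_rank(max: &BigUint, target: &BigUint) -> BigUint {
    let d = max - target;
    &d * &d
}

// fork rank is additive over its blocks:
// rank((a1 a2 ...)) = rank((a1)) + rank((a2 ...))
fn fork_rank(max: &BigUint, targets: &[BigUint]) -> BigUint {
    targets.iter().map(|t| block_rank(max, t)).sum()
}

ties would then compare the second element of the tuple (the summed hash distances) the same way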
by saying the probability is that, what my setup would be i mean saying the probability should be 0 doesn't mean much because anything is possible in an infinite universe lol so probability of a hardfork over 5 years or however long we're targetting true, ok we can discuss it another time, aka when we are going for the formal full writting could be 10 or 20, but needs a time limit true we create a model then plug in different numbers to test that's better since then we will also have a somewhat clear picture of our net hashrate this is similar to token stuff i guess yy we can do that, python is our friend anyway thats all, I will finish up consensus and push, then I can "finally" focus on the sync and drk week started with a bang :D !next Elapsed time: 14.6 min Current topic: net update (by draoi) Good stuff so i pushed a change that fixes deadlocks/ race conditions/ excessive use of rwlocks in the net code db92a9e3dc4fe4be9281666a7ba7a0ee91764251 i'm working on another update that will simplify the host/store.rs API a lot more hopefully net will be more stable now Great Will that also help with roaming reconnects? I was thinking, maybe "time with 0 connections" can be a variable influencing the peer lists as well not exactly, but at least we are not deleting whitelist and anchorlist entries any more (made possible by the improved/ safer handling of hostlists) So you don't block peers in edge cases so rn the only hosts that get deleted are the greylist entries that's a good idea re: time with 0 connections with that we could switch off the refinery so there would be 0 deleting of potentially valid peers Yeah bcos I remember this happened to someone Or something similar at least yes that was a blocker for seamless re-connect i'll implement that Sweet that's it from me !next Elapsed time: 4.5 min No further topics gg glhf everyone I'm still waiting for the zkvm review report Can share the notes if anyone's interested though can I run a single test with `make test`? tried `make test test_name` but ran all loopr: For that you can use cargo directly e.g. cargo +nightly test --release --all-features test_name Can specify also --lib or --package ah ok, cool thanks But yeah make sure you're using --release and --all-features forgot to add +nightly to cargo, that's why it was failing (duh) draoi: lmk when to update the nodes will do, it should be stable on latest master, but if you'd prefer to wait until we have seamless reconnect that's fine too okay will wait, tyty https://parazyd.org/pub/dev/random/zkvm-audit-notes.md loopr: on my setup, I have to define cargo release version for the make to run without errors - make CARGO="cargo +nightly-2024-02-01" test ++ !next Elapsed time: 6.0 min No further topics Anything else? event graph debbuging tool is ready to use on master neat ah that's great called deg :) loopr: what you up to? 
got a 1st version of the wif task will push today or tomorrow for a 1st review ok well not the full implementation tho but as it's the 1st task, soliciting early feedback before moving on Nice you know zkas has a constants section but the elliptic curve generator points are hardcoded there's a zcash function find_zs_and_us() to generate the data for using them in zk that's an importantish todo i assume this is a pointer for me, but i can't connect the dots yet on those names, so will try to make sense ACTION bbl ah ok no rush, when/if this interests you just tag me here and i can guide you oh so it's not related to wif it's my task but didn't get round to it yet yeah cya in a bit, thanks for the updates cya all !end Elapsed time: 6.5 min Meeting ended bye thanks everyone o/ loopr: you know what a generator in EC is, right? roughly so right now the constants usable in zkas are hardcoded there's no way to provide your own but we should be able to change it so .zk proofs can use any generator set in the constants section that's 1 of 3 remaining changes to crypto/zk then it's finished (the others being adding sparsetree which I'm doing now, and fixing DAO::vote() which depends on sparsetree change) ah the constants don't work with zk - coz the curve is different? no they are literally hardcoded right now, nothing to do with the curve so you can't add your own https://github.com/darkrenaissance/darkfi/tree/master/src/sdk/src/crypto/constants/fixed_bases these are the only ones we support now https://github.com/darkrenaissance/darkfi/blob/master/src/sdk/src/crypto/constants/fixed_bases.rs ok https://github.com/darkrenaissance/darkfi/blob/master/src/zk/vm.rs#L625 lol you see they are hardcoded? it should be possible to specify the (x, y) for the point directly in .zk and if it's a value we have in the cache, then load the Z/U values, otherwise generate them with find_zs_and_us((x, y)) https://github.com/darkrenaissance/darkfi/blob/master/src/sdk/src/crypto/constants/fixed_bases.rs#L115 kk nw tho, just sth that needs to be done and it's a kinda self-contained task gotcha hanje-zoe: Hi! Do you have more context about the contribute point (28), "Tutorial creating a ZK credentials scheme"? hi ash, do you know how the sapling payment scheme works? it's like that but inside the coin you have variable attributes which can be selectively revealed to authenticate with a service so the tutorial would cover the wasm rust contract, writing the zk proofs, deploying .etc ok afk for a bit, cya later I imagine that you keep those values private and send a proof instead, proving some conditions on the variables attached to the coin. right? hanje-zoe: I don't know what it is, I'm looking at it. And the credential scheme is built upon a commitment scheme. It reminds me of the identity commitment of the Semaphore protocol. hanje-zoe: Btw, yeah no worries, we can chat later ;) test test back ash: have you ever heard of blinded credential schemes? it's similar to that [17:38] hanje-zoe: you issue a credential which commits to several attributes. Nobody can see them, but you can authorize a service to access certain ones or even prove statements about them to operate the service [17:39] hanje-zoe: for example imagine a p2p forum, and you want to delete a post. either you must be the author of that post or be an admin or sysop.
So you would prove the statement: [17:40] hanje-zoe: post_owner == credential.uid or credential.user_level >= admin (hard for me to msg rn, on a train) Good, I get it. It sounds an interesting point to investigate further. I have already worked with the abstractions of zk-snarks, but my knowledge of math is limited. Do you think that lacking that math background is prohibitive for this task? I intend to work to close the math gap though Not math, just general darkfi tooling knowledge A lot of stuff like deploying contracts isn't documented yet You have to read the examples Cool, I'm really interested in working to complete that task If it is just learning the tooling it makes me feel more comfortable Be my guest, feel free to shout here. Already have experience in smart contracts and particularly working with groth16. So this sounds an approachable challenge. https://darkrenaissance.github.io/darkfi/dev/native_contracts.html Title: Native Contracts - The DarkFi Book https://darkrenaissance.github.io/darkfi/arch/smart_contracts.html Title: Smart Contracts - The DarkFi Book Thank you, any guidance I would appreciate <3 https://darkrenaissance.github.io/darkfi/zkas/examples/voting.html Title: Anonymous voting - The DarkFi Book I'll add a page later today with useful info on writing zk files Bookmarks added (y) ok, I will be attentive Also check the spec in the book got it yoyo sup : @skoupidi pushed 1 commit to master: 7010aae22e: validator: changed ranking logic haumea: I pushed the rank logic changes, check and update the doc accordingly will remove the vrf from PoWReward and fix the tom miners at ~200 blocks, 0 issues, max 1 block height reorg s,fix the tom, fix the tests tom nice ty gm : @parazyd pushed 3 commits to master: 551b96d4f9: sdk/crypto/blind: Don't use generics for fn random() : @parazyd pushed 3 commits to master: 0de97d0db3: chore: Update crate dependencies : @parazyd pushed 3 commits to master: 195c477caa: chore: Clippy lints haumea: around? gm gm gm oki so inside zk/vm.rs, where we load the witness values line 682 onwards, i want to witness the SparseMerklePath since PathChip::from_native(..., path), i will witness PathChip which is different to the other chips where they are loaded at the start of synthesize() in zk/vm.rs : @skoupidi pushed 2 commits to master: c9e2cc0a42: .github/workflows: nightly is back on the menu boyz : @skoupidi pushed 2 commits to master: 17f928db22: lilith: fmt draoi: I still get the [P2P] Connecting to manual outbound death loop in test : @skoupidi pushed 2 commits to master: a3a747df39: contract/money/pow_reward: removed obselete ECVRF : @skoupidi pushed 2 commits to master: e8885e629f: darkfid/tests: fixed block sync test logic ah ffs, looking into upgrayedd: darkfid-five-nodes or do you mean make test? make test can't really see which one is it ok ty since it gets into the loop and gg logs : @draoi pushed 1 commit to master: 2c15d0ad38: manual_session: delete 2nd death loop... https://www.youtube.com/watch?app=desktop&v=XspDkqEtWFE Title: 究極の謝罪を競う「土下座選手権」開催(Apology Olympics) - YouTube lmao! : @zero pushed 1 commit to master: 547ffe2144: add book page on writing ZK proofs with zkas, includes info on debugging ash: ^ as promised haumea: Excellent! (y) puzzled and afflicted I couldn't get my env to run tests yet I installed nightly-2024-02-15, nightly-2024-02-01, and also nightly-2024-01-15, but I still get this error[E0658]: use of unstable library feature 'stdsimd' Do I need to go back even further?
loopr: make CARGO="cargo +nightly-2024-02-01" test loopr: git pull and use current nightly it works 0de97d0db3ddf277a851b4dd4190fb5aff159658 and you can verify with c9e2cc0a42d917f1fd80039895b3cc0195aa591d so the nightly workaround is not needed anymore until nightly breaks again, cheers :D rebasing and checking hey upgrayedd how do I remove the fix we made in the Makefile error: Your local changes to the following files would be overwritten by merge: Makefile git restore Makefile thx it's so easy when you know how :D don't forget to update your nightly using rustup update looks good updated darkirc : @zero pushed 1 commit to master: 96a2e0b65f: book/wallet: add lessons learnt from mozilla XUL failures test test back was d/c gm gm up to chapter 8 of the Rust book (skimmed 7) are there any low hanging fruit tasks that aren't too complex? gm deki: zkas/parser.rs, the match on the token str: instead, VarType could have a method to be created from a str brawndo: when we call PathChip::from_native(path), it returns a PathChip. So when witnessing a path, do we store the PathChip, or just store it unwitnessed then construct PathChip later when doing the instruction? from_native() witnesses the path So you store the chip and use it where needed ok ty np rather than storing all the configs in a vec enum in zk/vm.rs, we could instead just store them in the struct directly that way we don't need the functions which search through the vec if we need a vec, we can just make a fn which returns the vec of chips see darkfi/src/zk/vm.rs:536 see also ecc_chip() It's unfinished so just leave as-is What would be better is to do work on the optimisation algorithm ok in your example circuit, path is not a Value::known type, just using the Path type directly should i use the .map() trick to get the value from inside it? (for prover_witnesses when used by the API user, storing the Path inside a Value) I don't understand I suppose you can, yeah i think the issue is that .from_native() takes a native path, then wraps the values in Value::known() ah Perhaps then they can be Values outside And not wrap in there we could do Witness::SparseMerklePath([Value::known(x1), ..., Value::known(xn)]) So you could maybe turn PathChip.path into Values then make a conversion function for crypto SDK's smt Path to I have no idea if/how that works Path is just an array of Fp, so i could make a conversion fn ez Up to you, do whatever you think is the best solution But I cannot guarantee it's safe/sound in zk ok yeah looking closer this seems to be it ACTION head about to explode lol :D haumea: okay looking into zkas/parser.rs, are you talking about token.as_str()? figuring out what the problem is, is the exercise I see, ty brb b : @draoi pushed 1 commit to master: 48a5dc1b2b: net: simplify and reduce Hosts API by introducing HostContainer... https://en.bitcoin.it/wiki/Wallet_import_format Title: Wallet import format - Bitcoin Wiki manually hashing the examples with sha256 or with my current rs code gives me the same result, but it does not match the example on the page any idea what I might be missing? manually: echo -n "800C28FCA386C7A227600B2FE50B7CAE11EC86D3BF1FBE471BE89827E19D72AA1D" | sha256sum
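re the Path -> Value conversion discussed above, it could be as small as this (a sketch, assuming Path really is just a fixed array of Fp as said; names hypothetical):

use halo2_proofs::circuit::Value;
use pasta_curves::pallas;

// native path from the crypto SDK: N field elements
type Path<const N: usize> = [pallas::Base; N];

// prover side wraps every node in Value::known();
// the verifier side would use Value::unknown() instead
fn witness_path<const N: usize>(path: Path<N>) -> [Value<pallas::Base>; N] {
    path.map(Value::known)
}

then Witness::SparseMerklePath can carry the [Value; N] directly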
well I am printing step by step, as in the example send the pastebin to check so I would expect `echo -n "800C28FCA386C7A227600B2FE50B7CAE11EC86D3BF1FBE471BE89827E19D72AA1D" | sha256sum` to print 8147786C4D15106333BF278D71DADAF1079EF2D2440A4DDE37D747DED5403592 as in the example But I get e2e4146a36e9c455cf95a4f259f162c353cd419cc3fd0e69ae36d7d1b6cd2c09 my rs code prints the same ok in a min You shouldn't be hashing the hex, it's just a representation of the binary format echo -n "800C28FCA386C7A227600B2FE50B7CAE11EC86D3BF1FBE471BE89827E19D72AA1D" | xxd -r -p | sha256sum aha! didn't even know this xxd tool double thanks brawndo yw thats_why_he_is_the_goat_THE_GOAT.gif https://yewtu.be/watch?v=465GqT_EuUA Title: 429 Too Many Requests lol lol++ in https://en.bitcoin.it/wiki/Wallet_import_format, the key is in hex format Title: Wallet import format, - Bitcoin Wiki what if the key is in bs58, like darkfi's `SecretKey`? : dasman reassigned task (kohufk): indicate unconfirmed tasks in italic to @dasman I opted to try decoding bs58 first, and then continuing if it failed. downside ofc is performance loss everytime it's not bs58 encoded https://codeberg.org/darkrenaissance/darkfi/pulls/250 Title: #250 - WIP: First iteration of task 17, WIF formatting - darkrenaissance/darkfi - Codeberg.org incomplete, it's just a request for feedback to sense if things are going into the right direction please review, be thorough, severe but fair - i'm learning gm gm gm : @draoi pushed 1 commit to master: a5c756bb1c: store: cleanup... gm loopr: your commit message is bad you made this just for secret keys, but as said in the task, it's for *all* user supplied input such as DAO bulla or public key why are you returning Box? we have a darkfi error.rs also remove all those crates you added to Cargo.toml gm hanje-zoe: I think it's fine to implement a trait called `Wif` which should then be able to take any 32-byte input yes doesn't even have to be a trait, just bytes + bytecode Trait is better IMO since it "extends" functionality And we'll be more explicit with types ok So you'll know, when coding, what you're dealing with Just bytes are too arbitrary You mentioned this already, when you were making the Blind types and similar yep makes sense the issue with is esp when refactoring, it's easy to swap things around. if everything is pallas::Base you might make such a mistake but not notice it because the values are 00...00 (for example) but yeah in general types are gud loopr: It should also use BLAKE3 instead of SHA256. Then you also don't have to double-hash, it's enough to hash once. I think we also don't need hex encoding, you could only use base58 Then for encoding/decoding, you should implement Display and TryFrom hanje-zoe: I want to do some cleanup in script/research. Can I delete: * antisapling * bulletproof-mpc * last_man_standing * vm-db-types * wif * homomorphic_encryption . : @parazyd pushed 1 commit to master: 4762c51f35: research/tfhe: Dark market implementation using FHE hanje-zoe: Merry xmas :D ^ !list No topics maybe we should keep bulletproof-mpc, and maybe last_man_standing since they have code i tried `make plain` you're so fast it's unreal ok plain is the normal matching, and fhe shows the FHE version Yeah there's various algos There's versions using all cores (parallel) what's -improved mean? 
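re the WIF checksum from earlier, the whole lesson in rust form (a sketch purely for reproducing the bitcoin wiki example, assuming the sha2 and hex crates; our own impl would use blake3 instead):

use sha2::{Digest, Sha256};

fn btc_checksum(hex_key: &str) -> [u8; 4] {
    // decode the hex first, i.e. hash the raw bytes, not their hex string
    // (this is what xxd -r -p did in the shell pipeline)
    let bytes = hex::decode(hex_key).unwrap();
    let first = Sha256::digest(&bytes);
    // bitcoin double-hashes; blake3 would only need one pass
    let second = Sha256::digest(&first);
    second[..4].try_into().unwrap()
}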
It's a faster algorithm, but harder to read/understand aha See the difference between plain and plain-improved a lot of people reckon this will improve Yeah it's getting better And zama.ai is doing really good work i have the TFHE pdf but didn't study it yet but i get the general idea of using the circle [0, 1) Likely torus will be the one that succeeds At least it seems that most work is being put into it : @parazyd pushed 2 commits to master: 1297ff7b07: src/time: Implement and use {over,under}flow-safe API : @parazyd pushed 2 commits to master: 30ebc50b3a: chore: clippy lints brb b loopr: https://darkrenaissance.github.io/darkfi/dev/dev.html Title: Development - The DarkFi Book why use arrayref! when you can just do array[0..4]? : @parazyd pushed 1 commit to master: 8778d57b42: chore: Update crate dependencies : @parazyd pushed 1 commit to zero_cond_gate_simplification: 7b5b9361f5: zk/gadget/zero_cond: Simplify ZeroCondChip PLONK gate... : @parazyd pushed 1 commit to master: 1215a4e701: zk/gadget/zero_cond: Simplify ZeroCondChip PLONK gate... : @parazyd pushed 1 commit to master: 258849f79e: zk/gadget/zero_cond: Simplify ZeroCondChip PLONK gate... : @parazyd pushed 1 commit to smallrangecheck_assert: 8e3edcb3cb: zk/gadget/small_range_check: Assert that `range > 0` in range_check() : @parazyd pushed 1 commit to master: 8e3edcb3cb: zk/gadget/small_range_check: Assert that `range > 0` in range_check() : @parazyd pushed 1 commit to scalar_witness: e3634955ab: zk/vm: Witness Scalar values at the time of witnessing rather than time of use. : @parazyd pushed 1 commit to master: e3634955ab: zk/vm: Witness Scalar values at the time of witnessing rather than time of use. : @parazyd pushed 1 commit to remove_unused_literal_enum: 60aa076763: zk/vm_heap: Remove unused Literal enum : @parazyd pushed 1 commit to master: 60aa076763: zk/vm_heap: Remove unused Literal enum : @draoi pushed 4 commits to master: d19f962830: net: replace downgrade, upgrade and blacklist methods with one method: move_hosts()... : @draoi pushed 4 commits to master: e46cb5d55a: chore: cargo fmt : @draoi pushed 4 commits to master: 4883b2f213: Revert "chore: cargo fmt"... : @draoi pushed 4 commits to master: fe18477578: chore: remove lingering merge artifact + cargo fmt : test test back : test back gm gm gm gm gm : @parazyd pushed 1 commit to condselect_unittest: 69ec231a14: zk/gadget/cond_select: Implement gadget unit test : @parazyd pushed 1 commit to master: 69ec231a14: zk/gadget/cond_select: Implement gadget unit test (I'm doing this as PRs so we can reference them in the audit report) : @skoupidi pushed 5 commits to master: e8d5b312aa: validator: clippy chore | darkfid/task/miner: minor logs added : @skoupidi pushed 5 commits to master: 99d149dd9b: darkfid/tests: forks sync test added : @skoupidi pushed 5 commits to master: c6637029fe: darkfid: remove second miners p2p... : @skoupidi pushed 5 commits to master: 9d1123bec7: net/message_subscriber::MessageSubscription: new fn clean() added to purge existing items from the receiver channel : @skoupidi pushed 5 commits to master: 39223c6a98: darkfid: fixed fork sync issues Hi! haumea: hanje-zoe: I have been reading through the documentation and the wonderful odyslam blog article, and got an idea about how zkas and contracts work. My strategy up now is to first work on the idea of the scheme and implementation on paper, once we agree that's correct then move into the actual implementation. And lastly make an tutorial about it. 
sounds good ++ So far, I have learned that a sm follows some general implementation steps: (1) define the function selection, (2) Deserialize the metadata to verify signatures and zk-proofs (3) Implement the logic and the state trasitions (4) Commit and update the state. hanje-zoe: Great (y) hey devs, I am facing again similar error, happened in docker build sometimes, worked in x64 for commit 96a2e0b65f7, but: for arm, even on hw(armv8), after removing "pathfinder" in Cargo.toml that breaks arm build, the problem is back :( https://0x0.st/Hh8U.tail.txt harness.rs again, test tests::sync_blocks ... FAILED any tips ? please ? root: did you pull latest master? latest commits fix that test that one is from 2024-03-05, I can try more recent, but (1) mentioned commit worked for dockers for x64 / not even on pure armv8 hw (2) that would require again to run : "find working buid" as you usually do not have working master. so I may try later, before that, any "other tip" please ? latest commit 39223c6a98aa53402fac8696d493c352c3541755 should be building on x86 platforms for exotic ones it depends wether or not dependencies have been ported there some hacks/ports may be required ah ok, so arm is "exotic". I believe wrong deps would fail sooner than with the test (and this particular one causing problems earlier as well / docker + SHM). https://0x0.st/Hh8U.tail.txt is the output tail. FYI it was working including docker builds (and tests) in February for arm and x64. so what hacks you mean are required for the changes in last month ? I am drinking wine, but I am not sure the alcohol level is the in my blood is the cause of the misunderstanding I feel ;) arm is pretty much standard these days, was talking for arch like riscv, were not everything is "native" yet, so some extra handling is required if it builds, then errors are usually code bugs good point. the problems show in different archs / dockers that would not be discovered otherwise. I may try some memchecks / valgrind later, just looking for quick tips if anybody has. I discovered "c" bug once in some old cryptocurrency via arm build long time ago, crazy times ;) > loopr: It should also use BLAKE3 instead of SHA256. Then you also don't have to double-hash, it's enough to hash once. intrigued but also confused. intrigued: how can a different hash function yield the same hash? confused: the examples there explicitly use 2 sha256 funcs, and notably: > 5. Take the first 4 bytes of the second SHA-256 hash; this is the checksum. but if we skip the 2nd hash, how can the checksum be correct? in fact in my first testing the 2 solutions (a: 2*256 sha256, b: blake3) largely are identical, but the last 4 bytes do not match a: 5HueCGU8rMjxEXxiPuD5BDku4MkFqeZyd4dZ1jvhTVqvbVdEsFw b: 5HueCGU8rMjxEXxiPuD5BDku4MkFqeZyd4dZ1jvhTVqvbTLvyTJ > why are you returning Box? we have a darkfi error.rs yep I saw that but fns I am using (e.g. encode_b58c_plain) in the implementation were returning Box, and it hadn't been able to match those to darkfi errors I guess I'll have to try harder > also remove all those crates you added to Cargo.toml loopr :: SHA-2 vs SHA-3, I support security upgrades don't get this part, are we not allowed to add any new crates to the project? is there another way to use external code without adding crates to Cargo.toml? do you even mean we have to implement everything by hand? root: sure, so do I, but don't see context sha256 vs blake3 loopr: SHA256 and BLAKE3 wouldn't yield the same hash loopr: It does not matter, the logic matters. 
loopr: BLAKE3 does not have the length-extension problem like SHA256 does, so that's why it is enough to hash once. loopr: For base58 we already have a dependency called "bs58" gm gm I'll be afk until afternoon brb https://www.dolthub.com/blog/2024-03-03-prolly-trees/ Title: Prolly Trees | DoltHub Blog this would be good in the future (post-mainnet) for wallets !topic updatable SMT Added topic: updatable SMT (by hanje-zoe) b gm : @zero pushed 1 commit to master: 376784af2e: zkas/zk: add sparse_tree_is_member() opcode ^ could i get a review on this? can we change HEIGHT in SMT to DEPTH? for merkle stuff, we always use depth so would be more consistent naming also i put the type(def)s for PathConfig/PathChip in src/zk/vm.rs, lmk if they should instead go in src/zk/gadget/smt.rs ... i wanted to leave smt as generic as possible without specialization ACTION will now patch DAO::propose()/vote() !deltopic 0 Removed topic 0 brawndo: you mean the result actually doesn't have to match the ones in the bitcoin wiki page? We just care about the process? guess it makes sense, I assume a darkfi key outside darkfi is meaningless so interoperability is not a goal brawndo: tw we have bs58 and not bs58 sorry, typing on mobile is awful btw we have bs58 and not bs58check in our crates, they are apparently not the same bs58 yields longer strings loopr: https://docs-rs-web-prod.infra.rust-lang.org/bs58/0.2.5/bs58/#optional-features Title: bs58 - Rust oops - hadn't jumped to optional features, thanks upgrayedd Base58Check is specific to the bitcoin WIF which uses sha256 you don't need it the task is to create a similar thing using our stuff, not create the rust equivalent we don't use sha256 so by default it will never work for example our algo would look like: 1. take the private key bytes 2. append prefix bytes on the front (I don't think we need/have compressed public keys) 3. hash it using blake3 4. take the first 4 bytes, thats the checksum 5. add the checksum bytes at the end of the extended key from step 2 6. convert the bytes array to a string using bs58 encoding ^^ PrivateKey -> WIF Oh WIF -> Private key: 1. Take the bytes of a WIF string using bs58 decoding 2. Drop last 4 bytes (checksum) 3. Drop first byte (prefix) Thats your private key bytes gg easy gotcha Would I have been able to pick up this spec somewhere before? you can first do it in a simple rust script (or test) to verify both ways loopr well thats the point of discussing the tasks here I just went following the wiki this is not a spec, I just came up with it although it was discussed more thoroughly with hanje-zoe some mtg ago want me to write the WIF checksum checking also? I think you got the hang of it I see. I think I was too eager to show something quickly instead of first asking I understand, but you see that thats not always desirable in terms of wasted time upgrayedd: yeah I got it absolutely this is used not just for private keys, but for sharing any data in general ++ the trait should be just for a bytes array loopr: wanna race who's gonna make the rust draft first?
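since a race was called, here's a rough draft of the exact steps above, using the bs58 and blake3 crates we already depend on (the 0x80 prefix byte is a placeholder, not our actual prefix):

const PREFIX: u8 = 0x80; // placeholder prefix byte

fn to_wif(secret: &[u8; 32]) -> String {
    // steps 1-2: prefix byte in front of the key bytes
    let mut ext = vec![PREFIX];
    ext.extend_from_slice(secret);
    // steps 3-5: blake3 once (no double-hash), first 4 bytes are the checksum
    let checksum = blake3::hash(&ext);
    ext.extend_from_slice(&checksum.as_bytes()[..4]);
    // step 6: bs58 string
    bs58::encode(ext).into_string()
}

fn from_wif(wif: &str) -> Option<[u8; 32]> {
    let bytes = bs58::decode(wif).into_vec().ok()?;
    // 1 prefix byte + 32 key bytes + 4 checksum bytes
    if bytes.len() != 37 || bytes[0] != PREFIX {
        return None;
    }
    // verify the checksum rather than blindly dropping it
    let checksum = blake3::hash(&bytes[..33]);
    if bytes[33..] != checksum.as_bytes()[..4] {
        return None;
    }
    bytes[1..33].try_into().ok()
}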
XD i am gonna participate in a local hackaton in 1 hour...you would win undoubtedly oh, I either underestimate the time to produce a draft, or you overestimated it should be like 10m tops XD didn't have breakfast nor shower yet, but if you like, I can give a shout when I get say 20mins and we can give it a go considering your tume zone ofc s/tume/time sure, although I would probably just do it while you are away XD drafts are disposable, the trait code is the real thing we want so it doesn't matter who did it first ok fine hanje-zoe: "sharing data in general" now makes sense upgrayedd: you have my pubkey? care to send me a msg? I have a superquick one-off q for you... loopr paste it here and will check, might have you under different alias Ok will do later as I actually don't have it on my mobile, duh looprt: https://pastenym.ch/#/e6uUKXbn&key=ca36d34ff4a560b6e9d510bccba698f0 Title: Pastenym loopr pastenym formats funny for some reason, so do some auto inteding on your editor hanje-zoe: feel free to also take a look brawndo said there should be a trait so types can indicate their own prefix byte(s) hanje-zoe: remember this is a draft lol ++ brawndo: for smt non-membership, i've been reading the code but don't see how we prove a value is not in the set certainly if x is in the set, then we can prove it is with the correct path likewise if x is not in the set, we cannot present a valid path but if x is in the set, surely you can present an invalid path that doesn't hash to the root? i think we also need to have the path stored, then check the path corresponds with the leaf and check the leaf = None although maybe i misunderstood, was reading this https://eprint.iacr.org/2016/683.pdf https://eprint.iacr.org/2018/955.pdf so proving that the leaf at a certain index = None => leaf is not in the tree like this https://blog.iden3.io/sparse-merkle-trees-visual-introduction.html Title: Sparse Merkle trees: a visual introduction - Iden3 project blog this is really good https://hackmd.io/@aztec-network/ryJ8wxfKK Title: Merkle Trees - HackMD . . .... loopr: cat on keyboard? message to the future yeah loopr: did you see the code I send? btw I guess it's no problem to "ambassador" darkfi in local events I suppose? not presenting myself as an "official" yet, just talking to people upgrayedd: I saw you sent something but didn't open it yet loopr: lmk if the link doesn't work to resend upgrayedd: the link opens but it kinda stucks in "Paste being loaded" and nothing happens sec I am tho on a VPN through the hackathon pub wifi loorp: https://pastenym.ch/#/JLbEn1HC&key=3d9792c8942f4a12bbe09e93e48382dd Title: Pastenym try this one yep that worked thanks my pubkey: 4rzHWemAB35pLjGZeKeCdGYKRa3ZG5QNRGcrJecwjgU3 loopr: FfrD6FVbQZmbA5TQVxeRra4cAryH8ijC9G3r2BieFzSs loopr: send DM when you add me hanje-zoe: A sparse merkle tree is ordered hanje-zoe: So a non-inclusion proof is proving that a certain leaf is null upgrayedd: sent DM brawndo: so don't we also need to check the position is correct too? 
the current impls don't use any position aztec uses an smt with a depth of 256 to store their nullifiers it adds slight overhead for update/check inclusion (non-zk) which is good for money since it doesn't impact perf checking non-inclusion inside zk does indeed take 256 hashes though but worth it for DAO::propose() since there's no other way to check coins being used are unspent (currently we dox the coins by revealing nullifier public) if you agree (checking i didn't miss sth), then i'll work on these changes : @draoi pushed 3 commits to master: a5687f973b: store: reject peers that already exist on the white or gold list.... : @draoi pushed 3 commits to master: 094c7f957a: lilith: upgrade to new hosts API : @draoi pushed 3 commits to master: 35a72c81a9: net: fix bug that was causing duplicate hostlist entries... brb : @zero pushed 1 commit to master: 1f95e4f53c: sparse merkle tree example : @zero pushed 1 commit to master: abc318b096: smt.py: add missing check for position brawndo: ^ check this commit abc318 and if it's correct, then i can add it to SMT : @zero pushed 1 commit to master: 667e1505ec: smt.py: modify impl to match the impl we have in rust hey gm gm gm brawndo: ping me when you have a sec hanje-zoe: pong hey branwdo, will discuss more in the meeting, but just wanted to see if i'm on the right track !list No topics aztec uses a 256 level deep SMT, and i think we need to make the position explicit my rough WIP: https://agorism.dev/uploads/smt.diff Yeah so I don't know what is the maximum capacity of the BTree i'm making it a trait so it can be used in wasm with db_set/db_get In our impl they get ordered by index (SmtMemory, SmtDb .etc) What is your question precisely? yeah the SMT is really good in the Path, I want to introduce a new type called Position and add that to the gadget as well so you can say this leaf = None, right now i think you cannot guarantee that with the path Did you try modifying the tests to prove that? It would just be Fp::ZERO (The leaf) but if i give you a path, how do you know it's leaf 5 i've revealed and not leaf 6 (which is none)? i made a python file: script/research/smt.py !topic SMT changes Added topic: SMT changes (by hanje-zoe) ok i'll make that unit test and continue this work until meet later didn't want to disturb your focus, just checking in sec just looking at the code kk ty Yeah the point with a sparse merkle tree is that it should be ordered So given any leaf you should know its position Perhaps your thinking is correct though, in zk I don't know how it would work the issue here is that leafs can sometimes be None though so if i want to prove that a particular position is none, i need to give you the path They're not None (not to do with zk) You're proving that it is Fp::ZERO i mean empty_hash yeah sure 0000, None .etc Yeah so a non-inclusion proof is profing that some leaf is zero i can give you any path for a 0 value to prove a leaf is 0, even if it isn't 0 did i understand correct? or i'm missing sth? Shouldn't be like that // 7 is none let proof = smt.generate_membership_proof(7); // show leaf 0 is none let res = proof.check_membership(&smt.root(), &default_leaf, &poseidon).unwrap(); but .check_membership() doesn't have the position for leaf 0, so how do we know this is for leaf 0? 
it could be any leaf Yeah I think we need the position ok cool tyty i was sure i missed sth cos it's so well written and read and re-read the code several times :D !deltopic 0 Removed topic 0 !topic mainnet status Added topic: mainnet status (by hanje-zoe) !topic SMT changes Added topic: SMT changes (by hanje-zoe) : narodnik added task (xf5EUY): ability to use task RefIds instead of ID. assigned to dasman gna be offline intermittedly as testing some net stuff gl !list Topics: 1. mainnet status (by hanje-zoe) 2. SMT changes (by hanje-zoe) bruh, I had a mini heart attack, I had some changes in tau and was wondering why I'm not syncing xf5EUY anyway, tau xf5EUY is already done : @dasman pushed 1 commit to master: 66d5e760b9: bin/tau: accept full RefIDs as IDs as well : @skoupidi pushed 1 commit to master: 3a446cecfd: darkfid,minerd: better mining abort signaling between the two daemons oh nice dasman, apologies i didn't see it and refIds weren't working for me was only first 6 chars, but now either that or full refid should work ahh cool ty but can you make it any set of chars? like in git sure for example 1f95e4f, 1f95e4f53c8e or 1f95e4f53c8edbe586 works try running: git show 1f : @dasman pushed 1 commit to master: 5dec33a632: bin/tau: accept RefID in any length you can still use local # ids ++ ty : dasman stopped task (xf5EUY): ability to use task RefIds instead of ID Hi Hello yo gm hey yo GN !list Topics: 1. mainnet status (by hanje-zoe) 2. SMT changes (by hanje-zoe) hihi https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html#mainnet-tasks Title: Contribute - The DarkFi Book https://darkrenaissance.github.io/darkfi/arch/arch.html#release-cycle Title: Architecture - The DarkFi Book !start Meeting started Topics: 1. mainnet status (by hanje-zoe) 2. SMT changes (by hanje-zoe) Current topic: mainnet status (by hanje-zoe) i was looking over those tasks (in the contrib page) we are so close to mainnet, we need to push this out there i plan to sometime (after SMT task) this week reorganize those tasks into high and low priority once we can release, we capture market cycle, and can take a breath. it seems we are on the 20% phase I'm focused on darkfid right now, consensus seems to run smoothly without hard forks, every node converges correctly etm. nice, i'm going to fix up the DAO then focus on updating the drk tool so right now I'm mainly focussed on performance, since I see a lot of headroom with regards to how we handle things i want to do a thorough review of the net code as well and apply hardening You do know we'll have to run the testnet for a few months, right? yeah but we ideally should get the code review done at the same time yeah was thinking that the next milestone is testnet rly mainnet is close wrt tasks, but not practically after we finish the dev tasks, we get a code review and do Q&A ourselves (also i'll complete spec, and we write more docs) then there are non-dev tasks, which you're talking about brawndo? It's testing and making sure things work, having a stable working testnet for a few months, etc <0xhiro> happy to assist in non-dev task if there's any hello Oxhiro <0xhiro> hey draoi there is a balance between making sure everything is secure/bug-free which the code reviews (assuming completed) help with other stuff is doing scaling tests ourselves, but we don't necessarily need to delay mainnet for that and can be done at the same time I disagree We should be having a 100% stable testnet ++ Hi! wdym practically hanje-zoe? 
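re needing the position: a rough sketch of what the native check could look like once pos is explicit (all names assumed, not the actual smt api; u64 pos for brevity, a 256-deep tree would need a biguint):

use pasta_curves::pallas;

// walk from the leaf up to the root; the bit of pos at each level says
// whether our node is the left or the right child
fn check_membership(
    root: pallas::Base,
    leaf: pallas::Base,
    pos: u64,
    path: &[pallas::Base],
    hash: impl Fn(pallas::Base, pallas::Base) -> pallas::Base,
) -> bool {
    let mut node = leaf;
    for (level, sibling) in path.iter().enumerate() {
        node = if (pos >> level) & 1 == 0 {
            hash(node, *sibling)
        } else {
            hash(*sibling, node)
        };
    }
    node == root
}

// non-membership of pos: same call with leaf set to the empty hash,
// which pins the claim to that exact position instead of any zero leaf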
i mean what are you proposing wrt testing what kind of areas do you forsee would need work? I suggest there is a sweet spot there draoi: i'm trying to get an expectation of deadline because we could miss altseason you want to do some performance testing to make sure things don't come down at the first bull run but 100% stable is a high goal which means significant loss of value for everyone true but we don't want to enter into a reactive mindset and rush things and also we need to coordinate on all the other work to be in sync having a secure mainnet is more important than launch timing ofc, but these things can also take forever and so it's good to have an expected deadline lets say we have a working testnet in 2 months, do we then wait another 4 months running it until mainnet? What has to be done first is a stable testnet that is used and adopted by people and all the main bugs are fixed (there are bugs everywhere) Yes I'd keep it running for more months Should be incentivised as well, so some of that contribution flows into mainnet yeah incentivised would be good for testing incentivized testnet is also an example of how you can use the bull market to drive hype w/o rushing mainnet i think token value is captured only on upside, not pre-release value <0xhiro> yay and u get all those airdrop hunters coming to test hah actually pre-release value is important cos it's pure speculation and can have a lot of upside vs when things are live, reality sinks in incentivized = bug bounty? or how maybe just PoW what are the other blocking missing dev tasks apart from what's mentioned above for initial testnet release? I think the wallet is unfinished yep i'll update drk (mentioned above) chia had a successful pre release hype, where people set up mining machines on the testnet zk gadget tweaks so are we ideally looking at mainnet in december? Likely sooner Assuming no laziness do we want to do any promo, either light touch or heavy push? wdym maybe during testnet time that's not a convo for #dev :D ok just trying to visualize how it will all fit together and what are the missing pieces that need to happen so we get the testnet operational, then revisit this in march we are in march fyi brawndo: should we just pick a code auditor now from the available options and schedule that? draoi: i mean april We can, yeah do you have any preference? i have no idea which is good if anyone here knows any good auditors, let us know No preference, we got some suggestions i could do intro to hexens I'd say we chat them all up and then choose the best offer (and vibe) ++ ok cool, i'm on it Who's hexens armenian outfit seems https://hexens.io/ looks like smart contracts Title: Just a moment... not sure if they do zk pass We need systems not crypto gotcha i was recommended cure53.io which you said looks web, but they do the audits for tor cure53.de thoughts on halborn? https://www.halborn.com/ Title: Web3 and Blockchain Security Solutions | Halborn I mean cure53's main thing is the web, but they might be good would cyvers qualify? 
cyvers.ai See in 2024 and 2023 they're auditing web stuff seems it was tor browser https://restoreprivacy.com/cure53s-tor-browser-audit-validates-softwares-robust-security/ Title: Cure53's Tor Browser Audit Validates Software's Robust Security | RestorePrivacy (Spoiler: it's all js shit) As-if lol ok well nym used them, said they do rust stuff but don't know blockchain/consensus rex: cyvers so it says web3, but we want a systems auditor who can do rust rather than solidity code As said, let's try to chat with all of them and see who we can vibe with same for halborn ok lets move on !next Elapsed time: 24.9 min Current topic: SMT changes (by hanje-zoe) i have a few sub-topics here !topic updates Added topic: updates (by draoi) > use depth since used elsewhere (consistent) OR we can change merkle depth to height in other places > likewise level starts from bottom, but index starts from the top? maybe reverse the level so it measures the depth rather than the height. approve of using depth? ++ > proposed terminology: level - _depth_ in the tree loc - (level, pos) tuple pos - leaf index, equivalently the direction through the tree idx - index inside db for nodes. Leaf idx = final_level_start_idx + pos node - hash(left, right) or the leaf values approve? lgtm > proposed changes: 1. traitify SMT with optional backends, for example using db_set/db_get inside WASM and support biguint, as well as current one with u32 2. biguint, depth=256 and adding all nullifiers to a SMT inside money 3. make pos explicit (discussed earlier) 4. remove leaves from ::new(), since new() just calls .insert_batch(), the API user can do that anyway Explain? which one? 1, 2, 3, and 4 lol 1. so right now the SMT is an in memory store, but if it stores all the nullifiers (which is needed for checking a coin is spent inside ZK), then it should use the database so instead of calling self.tree.insert(...), instead we make SMT a trait, but you can specialize it with SmtMemory or SmtWasmDb SmtMemory uses self.tree.insert(...) while SmtWasmDb will call db_set/db_get but they use the same algo otherwise https://agorism.dev/uploads/trait.diff.txt I suppose that's fine for 2, the cost of add and checking membership of SMT (not in ZK) is marginally a bit more than the current approach with nullifiers but using an SMT for the nullifiers is much better, so i want to change money to put all the nullifiers into SmtWasmDb with a depth of 256 the index is a BigUint aztec does this https://hackmd.io/@aztec-network/ryJ8wxfKK Title: Merkle Trees - HackMD SMT? SparseMerkleTree? https://hackmd.io/@aztec-network/ryJ8wxfKK#Merkle-Trees-in-Aztec Title: Merkle Trees - HackMD loopr: yep ++ ACK ok 3 is just what we discussed earlier, we need the position when checking non-membership so i'll make that change Yeah that' s something I missed The zk gadget should also get reviewed You can likely modify it as well when you add the position stuff 4 is just a minor change. right now in new(leafs) you do setup for the Smt and then call self.batch_insert(leafs), but i think we should remove that from new() so you call new(), then call .batch_insert(leafs) Sure ok final 2 things > other topics: * why in gen_empty_hashes() does .take(N)? * should we use the leaf value as the pos itself? benefit is easier API and simpler impl / less error prone, downside is we cannot use it as a key-value store (but this can be worked around by storing hash(key, value) as the leaves). for item in empty_hashes.iter_mut().take(N) { empty_hashes is length N so is this a mistake? 
The leaf value as position should work I guess I don't know if this has any anonymity impacts or not though it doesn't since the path is anon inside zk, but i'm unsure if it's desirable or not makes the code simpler though, but has that downside if you prefer it that way though, it certainly matches my usecase (storing Fp) Yeah we could ask someone on this topic `for item in empty_hashes.iter_mut().take(N)` No reason, likely just leftovers let mut empty_hash = F::from_uniform_bytes(default_leaf); btw this is a mistake Why? since from_uniform_bytes() is used to convert a 64 byte hash to a 32 byte Fp I mean yeah so if default_leaf is non-zero it will create an invalid value It won't, I think FromUniformBytes is used everywhere Perhaps not it reduces default_leaf mod p Could just convert the code to take Fp yeah or 32 byte repr of Fp anyway i'll figure it out Indeed !end Elapsed time: 16.7 min Meeting ended what? you skipped my topic lol oh sry !start Meeting started Topics: 1. updates (by draoi) Current topic: updates (by draoi) ty just wanted to give a small net update, i've made some changes to the net code and it is much stricter and more stable now polished marble i will soon push the change that allows us to have seamless reconnect, and then will do some tidying up then will ask for review that's it from me, made this a general update topic in case others wanted to also share updates working on SMT + SMT gadget, then modify DAO, then DRK (and also net) Awesome ACTION has their tasks on tau, but working on event graph replayer, will probably do read-only key for tau first I made a set of fixes to zkvm, and YT will check the sparse merkle tree gadget and do some optimisations as well is the contrib page the goto place for all about tasks or just a frontend loopr: we need to make the generators for zkas configurable if you can do that yep ok looks like a bit challenging but happy to be challenged it has nothing to do with cryptography, it's a purely code task ah ok !end Elapsed time: 5.9 min Meeting ended btw would it be possible to generate the contrib page from tau or whatever ty all why? we don't want to run loads of tasks and infra just run tau directly I have two comments regarding the meeting, if I can speak yeah ofc everyone's free to speak ok cool I want smth for myself just thought if contrib page is generated might be easier to maintain that's all, thanks everyone (1) Launching an incentivized testnet, then leveraging the hype around it to mainnet. Seems a good middle point, at least seeing it from outside. ash: fyi in future meets, you can add a topic with !topic (2) For and audit I would recommend to checkout https://txpipe.io/ Title: TxPipe.io nice they look decent brawndo: Federico Weill looks like the type of guy you said does good audits development team of the year in cardano, there they don't need intro. They focus a lot in Rust and sm audits. personally I love aiken cool they look good ty If needed I can ping them. hanje-zoe: I'm shy, but i will take the command next time. hahaha . https://github.com/aiken-lang/awesome-aiken?tab=readme-ov-file#tutorialsexamples Title: GitHub - aiken-lang/awesome-aiken: A collection of Aiken libraries, dapps, and resources nice should i ping you on telegram or signal? telegram ok ty cya all, tyvm thanks all, o/ o/ Fede vibes are very nice bye guys hanje-zoe: I can send to you my user by dm i know it np good (y) <0xhiro> dun mind i ask a noob qn but how to run tau? 
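re change (1) from the meeting above, the traitified storage could look roughly like this (method names made up; the real smt keeps the tree algo on top of this):

use std::collections::BTreeMap;

use num_bigint::BigUint;
use pasta_curves::pallas;

// one tree algorithm, pluggable node storage
trait SmtBackend {
    fn put_node(&mut self, idx: &BigUint, node: pallas::Base);
    fn get_node(&self, idx: &BigUint) -> Option<pallas::Base>;
}

// the current in-memory store
struct SmtMemory {
    tree: BTreeMap<BigUint, pallas::Base>,
}

impl SmtBackend for SmtMemory {
    fn put_node(&mut self, idx: &BigUint, node: pallas::Base) {
        self.tree.insert(idx.clone(), node);
    }

    fn get_node(&self, idx: &BigUint) -> Option<pallas::Base> {
        self.tree.get(idx).copied()
    }
}

// inside wasm, an SmtWasmDb impl of the same trait would call the
// db_set/db_get host functions with idx serialized as the key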
0xhiro: https://darkrenaissance.github.io/darkfi/misc/tau.html Title: tau - The DarkFi Book but just make BINS="taud" and use tau-python as cli or add this to bashrc: alias tau=/path-to-darkfi/darkfi/bin/tau/tau-python/main.py also add this to ~/.config/darkfi/taud_config.toml : seeds = ["tcp+tls://dasman.xyz:23331"] test test back hello, anyone there? hmm ... so tests::sync_blocks fails when run via Valgrind on x64 after about 35 hours. anybody interested in logs ? hey cheese root: Sure https://pastenym.ch/#/BP1E4g1B Title: Pastenym I think it was the same error as on pure ARMv8 hw (no docker, no valgrind, ubuntu22), valgrind test still running on arm ... root: And what does it mean when you say it's been running for 35 hours? It just took that much time? You should likely be running the rust in --release mode Unfortunately the zk stuff is really slow in debug mode gm in and out of internet for a bit hanje-zoe: ping hey branwdo Hey I'm thinking a bit about IRC and RLN did you know about this history of IRC? https://en.wikipedia.org/wiki/IRCnet#History Title: IRCnet - Wikipedia Would the accounts be a smart contract on-chain, or we'd somehow find a way to keep it separate and irc-only originally IRC was meant to be decentralized IRC is decentralised to an extent Well, rather distributed than decentralised it would be smart contract on-chain, so the IRC would be an app in the wallet able to access the money module to check staking The network is architectured that way since inception originally anybody could join IRCnet but EFnet was created to block that Yes I know i didn't know how do we set the epoch times? algorithmically or by a mod? It can be hardcoded to begin wiht *with 09:46 it would be smart contract on-chain, so the IRC would be an app in the wallet able to access the money module to check staking Yeah alright ++ although in the beginning we can just do it by hand without the appropriate framework i was thinking for the DAO, it would be nice to be able to use the IRC likewise to send money to your contacts Yeah but that's kinda counterintuitive if you're allowing anyone to use any nick You'd need some kind of authentication that way, which compromises anonymity i mean ones you've added manually ah yeah you can have an encrypted session That'd also work well with pqxdh we could maybe make adding DMs easier too, so like a /import command to add the last shared key in a chat Should I work first on pqxdh or rln? btw I'm still not using darkirc, waiting for it to be stable how does xdh protect my DMs? i know about forward/backward secrecy but why does that matter? surely the risk is people get my secret key, xdh doesn't protect against that or does it? 
You need to do key rotation periodically That can be coded in to be done automatically ok rln sounds more fun https://avinetworks.com/glossary/perfect-forward-secrecy/ Title: 403 Forbidden ty i know you rotate keys *i didn't Yeah you should do that for each session Protects against harvest-now-decrypt-later attacks ic, well DMs are private now but we're vuln to spam attks + we could advertise RLNs ;) Yeah though we aren't getting any spam i'm actually surprised We should only require proofs once an algorithm decides there's too much traffic from some peer or smth i would do 3 tiers It's too intensive to have to generate a proof for every message ok So that can be added to the p2p ruleset Although have to think what happens in the p2p context, since nodes relay other nodes' messages i think it's specific to darkirc protocol since it's a higher level thing (people stake coins, we verify it .etc) otherwise p2p has a dependency on money module I meant that, in the p2p proto of darkirc Not in the lib ++ I'll work on implementing services then We should do account management through NickServ Then you can do everything from the IRC client cool actually it would be nice controlling drk through IRC commands lol like in a private channel doing DAO ops or maping a swap in DM *making a swap ACTION tips brawndo 1 microdrk .etc i also had this thought recently that bots could also be p2p if they were plugins to darkirc like meetbot or taubot good boy tokens haha brb cool 1 DRK = 10^8 GBT lmao root: did you pull master? By the "logs" you seem to be on a commit from 2024-03-05, while the test fix was pushed 4 days ago quit gm o/ : @zero pushed 1 commit to master: 638c441a43: smt: replace FromUniformBytes<64>/[0u8; 64] with PrimeField/PrimeField::Repr brb i'm going to specialize Smt keys to BigUint: https://agorism.dev/uploads/smt.rs.diff tried making it general but it's way too much autism to handle where S::Key: Clone + From + Add + Sub + Mul + Shl + Shr + Ord You likely won't be able to use that in ZK it's an internal type for the DB because the location = (level: F, pos: F), which is converted to the index: BigUint... so i guess it's ok ok brawndo | Yeah though we aren't getting any spam how is spam handled in ircd or darkirc ? there is no spam *tips hat* milady : @draoi pushed 1 commit to master: 24212a6f77: net: enable offline reconnect... hanje-zoe: getting an smt.rs error running `make test` ok thanks : @zero pushed 2 commits to master: f92b55d16e: smt: add terminology doc : @zero pushed 2 commits to master: db67d3bdf0: fix broken smt tests upgrayed: no I did not pull master, but I try (I already answered that question - I need to exec "find working build" as master is always broken) the problem is not with the pull, the problem is the test passed on x64, and not on arm. my guess is some timing / timeout (so much slower arm or valgrind fail) tried on arm hw again to be sure:: darkfi.2024-03-05_96a2e0b6.git.try.compile.on.hw$ # ./target/debug/deps/darkfid-91a8e5bd55c1e197 --exact tests::sync_blocks .... test tests::sync_blocks ... FAILED ... thread 'tests::sync_blocks' panicked at bin/darkfid/src/tests/mod.rs:163:52: ... called `Result::unwrap()` on an `Err` value: ForksNotFound it might seem useful to have AI that answers questions about darkfi chat / darkfi code / darkfi web - anybody interested having that ? 
Nope root: that test was broken, hence why it sometimes passed while others it failed, use anything after 39223c6a98aa53402fac8696d493c352c3541755 root: do you mean a chat bot for the TG or fyi i'm changing smt so that depth=3 means the tree has 8 leaves (currently it means 3 layers and 4 leaves) Just make sure everything is reflected in the zk gadget :) sure thing will go through that once finished : @zero pushed 1 commit to smt2: 2ef17b46c2: WIP refactor of SMT ^ pushed a branch, will rebase on master once i'm finished dasman ive added seeds = ["tcp+tls://dasman.xyz:23331"] to the taud_config though im still havinig issues connecting to taud ![feature(generic_const_exprs)] Don't use this please It's never gonna be stable SIN: what kind of issues? (please used paste bins) dasman: https://pastebin.com/B8iTJaU9 Title: [INFO] net::seedsync_session: Connected seed #0 [tcp+tls://dasman.xyz:23331][I - Pastebin.com Here is my taud_config: https://pastebin.com/SQvZg4n6 Title: . - Pastebin.com ive uncommented some lines in here, and also removed the two default seed nodes SIN: don't use localhost as external address use this config: https://agorism.dev/uploads/taud_config.toml pull master dasman: I used the provided config, and when I run the ./taud command im getting this: https://pastebin.com/Aa1QnNxR Title: [INFO] net::seedsync_session: Connected seed #0 [tcp+tls://dasman.xyz:23331][I - Pastebin.com wait wait, raft? are you on master? please checkout master and git pull, and make Hey everyone! Back after some busy weeks and just read the 2000+ message backlog. A few days back there was a discussion about finality, and having a fixed number of blocks after which blocks get finalized. I would just raise the fact that doing so has a big downside, which is "baking in" attack blocks. If an attacker can 51% attack with a long reorg then they can get that locked in and it cannot be undone, however much hashrate can be brought to bear on the chain. It's quite like MESS, which was introduced in ETC in 2020 following some periods of extremely low hash-rate which let a series of 3 51% attacks in a row to happen (using rented hash from nicehash). https://ecips.ethereumclassic.org/ECIPs/ecip-1100 Title: MESS (Modified Exponential Subjective Scoring) | Ethereum Classic Improvement Proposals MESS was removed just recently with ETC now the super-majority hash in Ethash class. These finality schemes (or the close-to-finality scheme of MESS) run the risk of "locking-in" long-standing chain splits and/or making it impossible to reorg your way out of a dicey situation. Like this: https://bitcoinmagazine.com/technical/bitcoin-network-shaken-by-blockchain-fork-1363144448 Title: bitcoinmagazine.com If you had a finality scheme with a relatively small number of blocks then darkfi could get into a situation where a bug causing a chain split would become part of consensus forever because it could not be troubleshot quickly enough before getting finalized. tried to build tau following instructions on https://darkrenaissance.github.io/darkfi/misc/tau.html but it only built taud, yet no errors. updated to latest master Title: tau - The DarkFi Book oh actually scrap it, the make install step seems to be necessary...(although I got make: *** No rule to make target 'install'. 
Stop., but it seems some dark magic happened or maybe I had aliased tau before...¯\_(ツ)_/¯ loopr: taud is enough, and don't install it unless you know what you're doing, to not be confused with versions and the cli is tau-python I should modify that doc rn dasman: if taud does not help to track tasks in a distributed manner, and I would be able to track my own tasks with it, then I think I do not know what I am doing s/I\ would/I damn s/I\ would/I\ would\ not dasman, you mention tau-python, is that what we use to interact with tau? loopr: just run it from repo (./taud) SIN: yes https://pastebin.com/KpmVukv1 Title: git checkout masterAlready on 'master'Your branch is up to date with 'origin - Pastebin.com : @dasman pushed 1 commit to master: 4b28b79e5d: doc: update tau doc dasman: yo, after changing to the config file you linked earlier, it works (otherwise ./taud from repo doesn't connect, prints errors) loopr, what did you change in the config yes default config won't work, just copy the config file I linked into ~/.config/darkfi/taud_config.toml that latest pastebin I shared was with the config you shared here https://agorism.dev/uploads/taud_config.toml Yeah but you got this: wasm-strip: No such file or directory can't remember the right package, but try cargo install wasm-package, idk <0xhiro> gm everyone <0xhiro> just a quick qn again on tau, im not able to find the tau python file <0xhiro> https://pastebin.com/t6CXHJZF Title: :~/darkfi/bin/tau$ lsREADME.md tau-cli taud taud_config.toml~/darkfi/ - Pastebin.com greets 0xhiro: [~/src/darkfi]$ ls bin/tau/tau-python/ upgrayedd: ^ check messages from dark-john although i assume that monero merge mining protects against this? : @zero pushed 1 commit to smt2: 6f3fc65b67: smt2: add path : @zero pushed 1 commit to smt2: 2781b9af0a: smt2: cleanup and add missing docstrings : @zero pushed 1 commit to smt2: 93e0758e45: WIP refactor of SMT brawndo: how can i split an x: Value into 256 Value each corresponding to the binary digits of x? or should i instead use b: [Value; 256] and calculate x = b[0] + 2*b[1] + 2^2*b[2] + ...? just wondering whether we have sth like that already or not ah yeah we need to witness b i guess then enforce x == f(b) gm hanje-zoe: XMR merge mining does not protect against it hanje-zoe: Dunno, there are ways to decompose, but that'd be a very large gate ok thx hanje-zoe: Perhaps check for "decompose" in native_range_check.rs That goes bit-by-bit aha cool However in your example of calculating x, you're going 2^255 ? I don't think that fits in the field 253 bits fit into Fp good to know, although shouldn't affect the calc Sure just keep it in mind yep ty. native_range_check is a good ref, will work from this Yeah also it was already reviewed and fixed ;) So it should be correct nice i can split x into bits, witness the bits, and check x == f(bits) Yeah Remember to do copy constraints ++ Or just do everything within a single region ty i printed the code now to study See the difference between "witness_range_check" and "copy_range_check" https://github.com/narodnik/script/blob/master/md2pdf/codepdf ++ ty for the hint(s) yw having a gate with 253 + 1 advices would be ridiculous, since every time you call next (or prev), it's multiplying by omega to 'shift' the polynomial. so it would be an exponential number of multiplications so i should split the gate into a smaller 'window' (like with native range check) how/when should i use the table? how many values does it store? does it depend on the number of rows or the region?
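(aside: the windowed decomposition being discussed, sketched in plain Rust rather than halo2. This only shows the arithmetic shape — split the little-endian bits into WINDOW_SIZE chunks and recombine — not the gate/lookup layout; the names are illustrative.)

```rust
/// Split little-endian bits into `window_size`-bit windows.
fn decompose_windows(bits: &[bool], window_size: usize) -> Vec<u64> {
    bits.chunks(window_size)
        .map(|w| {
            w.iter()
                .enumerate()
                .fold(0u64, |acc, (i, b)| acc | ((*b as u64) << i))
        })
        .collect()
}

/// Recombine and check x == f(bits): window w_j contributes
/// w_j * 2^(j * window_size), mirroring x = b[0] + 2*b[1] + 4*b[2] + ...
/// (u128 here, so this toy only covers bit-lengths up to 128).
fn recompose(windows: &[u64], window_size: usize) -> u128 {
    windows
        .iter()
        .enumerate()
        .fold(0u128, |acc, (j, w)| acc + ((*w as u128) << (j * window_size)))
}
```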
aha i can make it configurable and basically just copy what native_range_check does > witness some bits > move up the tree using those bits > repeat until root > check root matches... done The lookup table stores as many values as you want See 'fn load_k_table' YT said we should be reusing the Sinsemilla table for everything though She'll fix it : @zero pushed 1 commit to smt2: 40484781e6: smt2: add get_leaf() fyi when SMT is added to money for nullifiers, it has near-zero overhead (we won't use membership/non-membership of SMT in money) https://codeberg.org/darkrenaissance/darkfi/src/branch/smt2/src/sdk/src/crypto/smt2/mod.rs#L205 Title: darkfi/src/sdk/src/crypto/smt2/mod.rs at smt2 - darkrenaissance/darkfi - Codeberg.org i left the distinction between leaf pos and leaf value, but in general the position should be calculated in a deterministic way from the leaf value (for example lookup table) What about the proof size and proving/verification time? for money it's unaffected, this only affects the DAO (the money contracts don't change, but i'm not including the overhead of the SMT gadget loaded by default into all circuits) ah for money, we just use .insert_batch(nullifiers), and to check nullifier doesn't exist yet, we check .get_leaf(nullifier) == Fp::ZERO aztec actually does this nullifier check inside ZK but we won't do that since things are fine as-is i just need it for the DAO since when voting/proposing, you reveal the nullifiers currently which doxes your coins Yep because in the case of voting you're not really spending the coins ok cool it makes sense yeah it's annoying and the workaround is that wallets have to move coins... but keep track of the old coins to be able to vote .etc which is complicated logic as well as expensive (paying fees to spend coins) actually i was looking recently at a DB system which can be rolled back to any point in time but which has efficient storage https://www.dolthub.com/blog/2024-03-03-prolly-trees/ Title: Prolly Trees | DoltHub Blog version controlled DBs <0xhiro> @hanje-zoe there isn't a tau-python folder in the darkfi/bin/tau folder 0xhiro: are you on master? git pull https://codeberg.org/darkrenaissance/darkfi/src/branch/master/bin/tau/tau-python Title: darkfi/bin/tau/tau-python at master - darkrenaissance/darkfi - Codeberg.org master branch, maybe you're using 0.4.1 for ircd, but tau is on master Cool, I'll check that draoi: https://mark-burgess-oslo-mb.medium.com/using-promise-theory-to-solve-the-distributed-consensus-problem-4cc2116f24e1 Title: Using Promise Theory to solve the distributed consensus problem | by Mark Burgess | Mar, 2024 | Medium <0xhiro> thanks after updating from git master <0xhiro> i tried running tau with the following err <0xhiro> https://pastebin.com/WSGZfcDZ Title: line 5, in from tabulate import tabulateModuleNotFoundError: No - Pastebin.com pip install -r requirements.txt you might need a python virtualenv hanje-zoe: as brawndo said, merge mining doesn't protect you from 51% attacks sure but doesn't that mean the attacker needs to 51% attack monero mining power?
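(aside: a toy stand-in for the nullifier flow described above — get_leaf(nullifier) equal to zero meaning unspent, insert_batch marking spends. The API shape mirrors the chat; the backing store here is just a HashMap, not the real WasmDb/sled-backed SMT.)

```rust
use std::collections::HashMap;

/// Toy SMT stand-in keyed by 32-byte nullifiers.
struct ToySmt {
    leaves: HashMap<[u8; 32], [u8; 32]>,
}

impl ToySmt {
    /// Absent leaves read back as zero, so a zero leaf means "unspent".
    fn get_leaf(&self, key: &[u8; 32]) -> [u8; 32] {
        *self.leaves.get(key).unwrap_or(&[0u8; 32])
    }

    /// Revealing a nullifier marks the coin spent (non-zero leaf).
    fn insert_batch(&mut self, nullifiers: &[[u8; 32]]) {
        for n in nullifiers {
            let mut v = [0u8; 32];
            v[0] = 1;
            self.leaves.insert(*n, v);
        }
    }
}
```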
although we have to comment that such attacks are relevant mainly to external observers where they follow fork tips, since double spending is not a part of the chain hanje-zoe in a sense yeah, they "need" that the only "attack" I see right now is a hard fork attack where adversary mines a fork privately and only publishes it when it's the same height as the to-be-finalized fork that means that a network race can occur, where miners choose to follow that new "unknown" fork instead of the previously known one hanje-zoe: No it's opportunistic hanje-zoe: Do you know how merge mining works? skoupidi: that attack you mentioned is called "selfish mining" fyi yy I know, just describing it so everyone understands what we are talking about :D is merge-mining when we target the same difficulty as monero, and the miners must checkpoint our block hashes in monero blocks? depends on the setup, if you use same parameters as monero then yeah your target will eventually "converge" close to moneros No, it means that Monero miners can choose to include something (maybe a checkpoint) inside a Monero block, and we can choose to trust that if more work has been put into it instead of our native block but that means that they all hash for your chain aha great 10:32 depends on the setup, if you use same parameters as monero then yeah your target will eventually "converge" close to moneros This is false, it largely depends on the hashrate yeah so we can trust monero checkpoints if it's deep enough brawndo: skoupidi | but that means that they all hash for your chain Yeah it's a pipe dream ++ So it's not a realistic/true statement fair :D The target would be the same only if darkfi has the same params and the same hashrate as monero btw a friend explained to me about selfish mining, and told me the work calculation is a probability expressing the expectation of a block https://en.wikipedia.org/wiki/Expected_value Title: Expected value - Wikipedia so for that, if the ranking algo somewhat protects from the long reorgs we "should" be good apart from that we can bake in rules into the buffer i think the calc we use is fine but will check it with people. he suggested: max / (max - target) where if you receive an unknown fork, it must be max size n-2 (2 for synchronicity) ++ how will the division be handled? we have to also take that into account we can do the btc trick to still use biguint, and just offset the floating points well our comparison is equivalent to that for a single block, but when summing they lose the equivalence so when i get time, i'll think what changes and what's the meaning of them both but yeah it's interesting thinking of it as the expectation/probability glhf i'd say that's more correct, but you possibly don't lose anything doing just (max - target) i'll ask around in next few months when we meet people. it's easy to change what do you think about the rejection rule? which one? i can't remember aka enforcing that everyone should only reorg forks, up to a certain height skoupidi | where if you receive an unknown fork, it must be max size n-2 well this is your area lol, but it seems correct what you say where N is the buffer(finality) size But as dark-john said that can lead to other issues like? 23:46 These finality schemes (or the close-to-finality scheme of MESS) run the risk of "locking-in" long-standing chain splits and/or making it impossible to reorg your way out of a dicey situation.
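(aside: the two block-weight formulas under discussion, side by side. `max` is the largest possible target; integer division is assumed, per the btc-style offset trick mentioned above.)

```rust
use num_bigint::BigUint;

/// Current-style weight: a lower (harder) target scores more work.
fn weight_work(max: &BigUint, target: &BigUint) -> BigUint {
    max - target
}

/// Suggested expectation-based weight: max / (max - target). The two
/// agree when ranking a single block but diverge once weights are
/// summed across a fork, which is the point raised above.
fn weight_expectation(max: &BigUint, target: &BigUint) -> BigUint {
    max / (max - target)
}
```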
well thats a problem of finality in general, splits can happen since you can't reorg what you consider as final : hello from mobile o/ Look mom, another Rust: https://gleam.run/ Title: Gleam i wonder if built in microthreads is the correct approach or not sometimes i would like to have the ability to schedule work more efficiently (so when calling .await to suspend coroutine, i can also mark its execution priority) would be hard to add that if the executor is baked into the language > god creates cpu > man creates software > software maxes out cpu > god creates threads > man creates multi-threaded software > software maxes out threads > god creates coroutines > man creates multi-process multi-threaded software > profit??? lol > man becomes transgender > transgender invents GLEAM XD hanje-zoe: do you have a min to talk about potential net simplifications? yes ofc essentially there are 2 things that come from the monero impl that i'm now questioning whether we need did you see that article i linked you? which one, where? https://mark-burgess-oslo-mb.medium.com/using-promise-theory-to-solve-the-distributed-consensus-problem-4cc2116f24e1 Title: Using Promise Theory to solve the distributed consensus problem | by Mark Burgess | Mar, 2024 | Medium when you get time, read this anyway cont... ty will do kk fyi the article is talking lock based approach vs using channels to achieve eventual consistency + a mechanism to see whether or not the pending tasks/work is completed lock based is non-ideal because it's a bottleneck, and eventual consistency has downside that program might be in an invalid state, but if you have a mechanism to see whether the state is consistent or not, you can get the advantages of both (possibly using a lock) 1. do we need an "anchorlist"? monero impl has a 3rd list (aside from white and grey) called "anchor". this is a list of nodes that we have actually established a connection to (in outbound or manual session). however in monero, the difference between whitelist and anchorlist is clearer since whitelist nodes aren't actually "connected to" (unlike in our case), they are just "pinged". whereas for us, we actually must establish a connection to a node in the refinery (called ping_node) in order for it to be promoted to whitelist. so whitelist and anchorlist are in some ways equivalent what if we change the whitelist to simply pinged not connected nodes like monero does? seems quite heavyweight doing a full version exchange, esp since the version messages will get bigger over time we discussed this before, but brawndo said we should keep as is, i don't remember why but can quickly dig up logs and doing it nonstop will consume a lot of bandwidth, which is bad on mobile It's good against sybil and later on we'll have swarms so you'll have to sort your peers per protocol draoi: so in monero, the anchor list is more like a platinum list? A peer can change its protocol and you should know it You'll also know when they upgrade their app, etc. yes hanje-zoe or "Gold" in our case brawndo: aren't you concerned doing it for 100 hosts on a mobile might be too much? 
the refinery rn is every 15s so it would be 1 host per 15s No ok well draoi you can merge anchor and whitelist ++ ok 2nd question rn we use a complicated address selection logic derived from monero <0xhiro> hi hanje-zoe i added seeds=["tcp+tls://dasman.xyz:23331"] in the taud config file 0xhiro: workspaces = ["darkfi-dev:2bCqQTd8BJgeUzH7JQELZxjQuWS8aCmXZ9C6w7ktNS1v"] there are two net settings white_connection_percent and anchor_connection_count, and when we select an address in outbound session it tries to select addresses according to these settings that's good i propose to make this more like the address selection logic in protocol address, which simply says: select from gold, if there's still space, select from white, if there's still space, select from grey ok as you prefer i think it's good to mix things up <0xhiro> im still getting Error: Connection Refused to '127.0.0.1:23330', Either because the daemon is down, is currently syncing or wrong url if you just select white/gold, you get sybil 0xhiro: rpc_listen="tcp://127.0.0.1:23330" draoi: you understand what i'm saying? <0xhiro> have that in the taud config file? yes no i don't follow the sybil risk lets say nodes A, B are white i always connect to these nodes node C is grey, i never connect to this node you see the issue? node C goes thru the refinery and becomes either white or gets deleted good to have some (smaller) % of grey thrown in <0xhiro> strange im still getting connection issue oh it gets deleted <0xhiro> rpc_listen="tcp://127.0.0.1:23330" <0xhiro> workspaces =["darkfi-dev:2bCqQTd8BJgeUzH7JQELZxjQuWS8aCmXZ9C6w7ktNS1v"] <0xhiro> seeds=["tcp+tls://dasman.xyz:23331"] draoi: could we have a "dark grey" list instead of deleting? <0xhiro> below is my config setting for the taud_config file just chuck them in there and forget 0xhiro: https://agorism.dev/uploads/taud_config.toml check your running processes chuck them in there and never connect to them again? or try to connect to them periodically? what if a node tries to give us the same peers again? just for this logic some small % like 15% or 25% of darkgrey nodes so rn we have a state called HostState::Suspended draoi: for example glowies have high availability servers because your algo much prefers them but random users have nodes with infrequent uptime circumvention of glow attack (sybil) depends on randomers and if we fail to connect to a node, it gets marked as Suspend, which means it should next go to the refinery what's Suspended? check 24212a6f774931ece83b815177c9723aeb50ad99 we're talking about 2 things here, right? yes 1. we want a decent number of stable reliable hosts so we can have a stable reliable p2p 2. we want to have a somewhat random connection pattern to avoid allowing external parties to shape *who* we connect to #2 is the sybil attack vector the most secure p2p net is one where all connections are fully randomized (the old behaviour we had) but we want to compromise on that slightly to gain #1 we try not to compromise too much, by only bucketing hosts into 2 lists (white and grey) ok i get you if a host is unreliable, they will get deleted from our grey list (^ via the refinery) but who is unreliable? 
random users disconnecting/connecting from say mobile so maaaybe (and if you decide i'm wrong that's fine) we instead of deleting them from the grey list, want to keep them around in a ring buffer (lets say 100 hosts) and maintain at least 1-2 outbound which is unreliable trying to connect to them so that way if the normal refinery is sybilled, at least we have an escape hatch with these random users to protect us draoi: does that make sense? idk it seems to undermine the point of the refinery 8 outbound, 6 are reliable, 2 are unreliable also, if we delete a peer, it doesn't mean it's gone forever, if that node becomes reachable it will be shared with us again how? through the protocol seed and the protocol addr, it will send its addr to other nodes on the network lets say there's a p2p network which is connected <0xhiro> @hanje-zoe im running taud via "./taud" command every node only connects periodically i send my addr around, i get upgraded onto the lists nodes also broadcast their greylists fyi so you don't need to be upgrayed to be shared that's bc of transports ok so im shared around still the node prefers whitelist trusted reliable conns so very few people connect to me until i've earned trust the network is vulnerable to sybil s/earned trust/ passed through the refinery yep i'm fine to keep the white_connect_count net flag but idk if we really need to introduce another list the refinery is concerned with reliability of the network <0xhiro> despite adding ur config file parameters <0xhiro> 10:42:28 [ERROR] taud: Please add at least one workspace to the config file. <0xhiro> Run `$ taud --generate` to generate new workspace. <0xhiro> im still getting this issue when i run ./taud 0xhiro: your config is not the one i shared draoi: here's another mental framework <0xhiro> i tried generating my own workspace and adding that param to the taud conf toml file but still to no avail you have a refinery process concerned with selecting reliable nodes you try to balance the concerns of having refinery be stable while not being insecure having explicit random/unreliable nodes mitigates that concern that's what the greylist is tho there's no right/wrong answer, you might decide otherwise for other reasons like: conceptual integrity, simplicity i'm not sure greylist is for that i'm saying we can keep the net flag due to concerns expressed, but i'm reluctant to add a darkgrey list greylist is nodes as they have been shared with us seems greylist is a prerequisite to get on the whitelist all nodes go to the greylist when we recv them when we start the node, all the nodes from our prev connections get loaded to greylist i.e. 
if they were on whitelist, they downgrade to grey again on start ok greylist is untrusted nodes anyway good to have some % of greylist ++ 0xhiro: the config path is ~/.config/darkfi/ not the taud directory but you can also use --config, see --help one thing about the gold/ anchorlist rn is that it does not get downgraded to grey when we restart the node, and we select from that list first to make a connection when we start the node that might be an argument for keeping the gold list, since otherwise we would be making 100% grey connections on start (before refinery kicks in) it seems useful to keep the distinction ++ i'm all for adding lists tbh since you have an easy abstraction over them for example instead of HostState::Suspended, maybe another list yeah perhaps Red or something hah not darkgrey plz XD brb we can name them after animals like Xenial Xerus or Mantic Minotaur XD <0xhiro> after running ./taud --config it generates a config file directly in the darkfi folder <0xhiro> must i move it to somewhere else? you must learn to read --help <0xhiro> alright so i ran ./taud --workspaces <0xhiro> now all i need to do is wait for my system to sync up with the other nodes? i said to use --config, read the help! <0xhiro> just to confirm --help would describe to me the various options and flag right afk, bbl 0xhiro, this has recently been updated https://darkrenaissance.github.io/darkfi/misc/tau.html?highlight=tau#install Title: tau - The DarkFi Book you would use the command "./taud --help" to understand the syntax and different flag parameters and your taud_config.toml file should be a copy of this https://agorism.dev/uploads/taud_config.toml the taud_config.toml file lives here ~/.config/darkfi finally grokked where upgrayed comes from, and upgrayedd is its daemon? lol > when we start the node, all the nodes from our prev connections get loaded to greylist could that lead to instabilities, e.g. when a lot of nodes upgrade their code? greets salut someone is going to do a workshop on nym locally here asked before, but I assume it is ok to promote darkfi there? not sure yet to what extent and what is going to happen there, so just asking first yeah plz do :D /query xeno do you want a grapefruit, kiwis are unripe... no other fruit lmao draoi: thanks found interesting rust project: cargo binstall just (or via os native package manager) https://github.com/casey/just maybe useful for small recipes? Title: GitHub - casey/just: 🤖 Just a command runner upgrayedd: I was testing a5687f973bd that is from mar 8 as well, and was first from the tip yesterday that passed all tests and built dockers on x64, rocky linux added, same as almaLinux, just different base image running 39223c6a98a build and test on ARMv8 now ... results in ~ 2 hours ...so it failed faster than I thought:: ARMv8 :: tests::sync_blocks ... FAILED ; https://pastebin.com/uKpMe1u0 Title: 19:38:15 [INFO] [WASM] Successfully got metadata ContractID: BZHKGQ26bzmBithTQYT - Pastebin.com but I used patches to build on arm, so maybe the problem is there, or with a tad different rustc -V it worked fine on 2024/2/17 https://github.com/spital/darkfi/releases/tag/v0.4.1-9f1649ef0__2024-02-17 Title: Release v0.4.1-9f1649ef0__2024-02-17 · spital/darkfi · GitHub got this coming up on the tau dark-dev workspace https://pastebin.com/gMFCdBgt Title: Workspace: darkfi-dev ID Title - Pastebin.com looks to be working a : ruben added task (YffvEP): review tau usage.
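(aside: the outbound selection order proposed earlier — gold first, then white, then grey — as a hypothetical sketch. List names and the slot-filling shape are illustrative; a real implementation would also mix in some random percentage of grey picks, per the sybil discussion above.)

```rust
/// Fill the outbound slot budget from the most- to least-trusted list.
fn select_outbound(gold: &[String], white: &[String], grey: &[String], slots: usize) -> Vec<String> {
    let mut picked = Vec::with_capacity(slots);
    for list in [gold, white, grey] {
        for addr in list {
            if picked.len() == slots {
                return picked;
            }
            picked.push(addr.clone());
        }
    }
    picked
}
```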
assigned to SIN : ruben stopped task (YffvEP): review tau usage hey, can anyone tell me what the process is to contribute? i see there's only a few pull requests on github and was curious if i can just make a pr from my fork cheese: https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html Title: Contribute - The DarkFi Book cheese: you need to know Rust to make code contributions, if you want to start learning I can send a few resources I'm using gm gm gm gm gm draoi: thanks for sharing, i took a look at that and didn't see anything about actually pushing code -- is that something discussed at the dev meeting? i've written a minor fix in my fork that i'd like to contribute yes you can make a PR okay cool, thanks p2p test is failing periodically, like every 3rd run or something, just fyi i am on it the DAO integration test is also failing for me draoi: in the pipelines if you run consecutive make test the second one will fail, prob because nodes in the test use same ports so for the second one they will connect to the first ones nodes and fail, something like that so its probably the ports i think it's something else but will look into : @draoi pushed 1 commit to master: a123826d22: net: cleanup and add documentation draoi: https://darkrenaissance.github.io/darkfi/dep/0001.html Title: DEP 0001: Version Message Info (accepted) - The DarkFi Book hanje-san: Do you need any help with the merkle tree stuff? i'm currently studying native range check, i'm moving through it very slowly. we could do a call where you walk me through it Keep reading, it will make sense :) currently i've done the first third of decompose, where you split the bitstream into windowed chunks, now looking at the bit where we loop over the chunks yeah mainly it's i keep having to lookup fns in halo2 and stuff should i write my proposed algo and show to you, or just attempt to write the code? or should i write an unoptimized but working version? then later we optimize it? Depends if you want to learn halo2 or not i'll try to do it, i'm up to a challenge :) good for the bus factor of the project too Here to help Yep our overall bus factor is quite high actually so indeed the p2p test never fails when it's ran alone (even when ran consecutively), so seems to be triggered by running multiple tests... haven't rly understood why tho draoi: its the ports :D ACTION is speculating based on lower dimension reality Yeah just ask the kernel for a free port in tests draoi: btw you might like tc for debugging p2p, simulating unstable internet .etc $ tc qdisc add dev NETWORK_INTERFACE root netem delay 50ms 20ms distribution normal $ tc qdisc change dev NETWORK_INTERFACE root netem reorder 0.02 duplicate 0.05 corrupt 0.01 ah amazin ++ https://www.youtube.com/watch?v=qvUdepG8Uiw Title: Jesse Lee Peterson DESTROYS Liberal Beta Male - YouTube https://www.youtube.com/watch?v=n__GJuqLb00 Title: You're a beta male, Sonic. - YouTube hey, i'm trying to setup tor with git. im doing the ssh config step and when i run `ssh -T git@codeberg-tor -vvv` i get an error that the flags `--proxy` and `--proxy-type` dont exist. im running this on gnu netcat. any suggestions on how to solve? > You will need BSD netcat installed do u know of a way to get bsd netcat on mac? seems to be tied to linux cheeese: iirc macos uses bsd netcat as default why didn't you just google that instead of asking? maybe using a mac has made you lazy no need for insults. 
before i asked in here i googled it, played around with the different flags in nc to try to get it working to no avail, then asked here whoever wrote the documentation about gnu netcat knows something i dont, i was hoping to get some more help from them. maybe i need to dualboot linux, idk no you didn't, i literally googled "bsd netcat" and there's a ton of stackoverflow pages saying the same thing "bsd netcat mac osx" https://agorism.dev/uploads/screenshot-1710420078.png and the guide literally says use "BSD netcat" Why not just use portable nc flags that work with every netcat? nc -x 127.0.0.1:9050 %h %p Host codeberg.org User git ProxyCommand nc -x 127.0.0.1:9050 %h %p In ~/.ssh/config brawndo: thanks for your response, playing around with this now. If the netcat you're using doesn't support it, then change it okay, thanks is it correct that the compiler reports constants as unused even if they are used inside the implementation of a trait? Is that a smell and should be designed differently? It also reports the trait as being unused obviously, although it's being used in the test code can be checked on codeberg, updated the PR afk brawndo -x is BSD netcat, not GNU BSD netcat is considered normal, but for some reason everyone always installs GNU netcat despite the guide saying to use BSD netcat oh god that still exists loopr: did you ever check the wif code? i figured out my netcat issues after playing around with the symlinks of netcat. in the guide, it says "Optionally you could use GNU netcat, but the flags are different..." which i think should either be removed if not supported or updated, as i had some trouble with it hey, i made a tiny pr to get my feet wet. i'd appreciate any feedback https://codeberg.org/darkrenaissance/darkfi/pulls/251 Title: #251 - zkas/parser: add check for function to be a symbol - darkrenaissance/darkfi - Codeberg.org : @draoi pushed 1 commit to master: ff6b2a3962: net: don't write test files to harddisk testing net stuff so comms are unreliable until further notice : @draoi pushed 1 commit to master: 00582c94a7: chore: remove deceptive comment in lilith whitelist refinery gm : @cheeese pushed 1 commit to master: 0aad2dfb06: zkas/parser: add check for function to be a symbol : @holisticode pushed 2 commits to master: 6b1f75b0e8: fix link to dev section : @holisticode pushed 2 commits to master: bd456cb839: chore: fix some typos... : @parazyd pushed 1 commit to master: e607236337: Makefile: Remove PROOFS_BIN on make clean... assert!(WINDOW_SIZE * NUM_WINDOWS < NUM_BITS + WINDOW_SIZE); do we also need a check that NUM_BITS <= WINDOW_SIZE * NUM_WINDOWS ? if so, then in darkfi/src/zk/gadget/native_range_check.rs:229 region.constrain_constant(z_values.last().unwrap().cell(), pallas::Base::zero())?; surely this check is only valid for some window sizes, but not all?
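(aside: spelling out the two bounds in question, with W = WINDOW_SIZE, N = NUM_WINDOWS, B = NUM_BITS — my reading of the assert, not a statement about the gadget's full requirements.)

```latex
\underbrace{W N < B + W}_{\text{existing assert}} \iff W(N-1) < B
\quad \text{(the last window holds at least one real bit)}
\qquad
\underbrace{B \le W N}_{\text{questioned check}}
\quad \text{(the windows cover all } B \text{ bits)}
```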
cheese: please use make clippy and make fmt before committing you can fmt as a pre-commit hook https://darkrenaissance.github.io/darkfi/dev/dev.html#cargo-fmt-pre-commit-hook Title: Development - The DarkFi Book hanje-san: that needs to get updated to use +nightly stable and nightly fmt produce diff results sometimes and the message should be 'Run make fmt to fix it' ok well feel free to change anything you want im on a WIP branch rn : @skoupidi pushed 1 commit to master: b2d2c7a4e8: chore: fmt brawndo: i get a panic with native_range_check unit tests cargo test --all-features zk::gadget::native_range_check::tests:: oh wait the code has a comment saying the command to run ok it works nvm ;) About your other question, YT is gonna remove NUM_WINDOWS I think Will have to wait on that ok upgrayedd: will do, thanks for letting me know : @skoupidi pushed 1 commit to master: 98ec24720c: chore: fmt : @zero pushed 1 commit to smt2: fbbd9c5b2e: WIP refactor of SMT : @zero pushed 1 commit to master: 907f2f6e7f: zk/native_range_check: add code comments #![feature(generic_const_exprs)] This should be removed and worked around the issue is that path has length N-1 while the empty_hashes is length N so one way is having a single unused value in the array Yeah or something That feature is never gonna be stabilised At least not in any near future ok understood : @parazyd pushed 1 commit to master: 1c163c546b: darkirc: Preliminary services implementation gm : @zero pushed 1 commit to smt2: 030d532222: smt2: replace generic_const_exprs unstable rust feature with temp workaround that can easily be changed once the feature exists in rust proper https://learn.0xparc.org/materials/halo2/learning-group-1/cost-model Title: PLONK Cost Model | ZK Learning Resources oh wait i see Tom Waits there : @draoi pushed 3 commits to master: 0f5fcba2c2: net: add `last_connection` timer to more empirically track connection status... : @draoi pushed 3 commits to master: 9bcb691c65: contrib: add dchatd localnet for minimal net testing : @draoi pushed 3 commits to master: d160e96161: net: gather unused channel_subscriber's into a single channel_subscriber in Hosts... brawndo: i just computed in python len(bin(... - 1)) - 2 for p and q, and it says 255, so it seems the maximum value for Fp/Fq is 255 bits the -2 is because bin(x) gives you "0b..." 0x40000000000000000000000000000000224698fc094cf91b992d30ed00000001 It will fit 0x3FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF And so on That's 253 iirc 254 bits >>> hex(2**254) '0x4000000000000000000000000000000000000000000000000000000000000000' q: if adding a personal task to tau, it will go to the shared workspace I guess. so prob best to create a personal workspace for personal tasks (guess via config)? is there even anything such as a "private" workspace in tau? or is this not the idea to use for personal?
otherwise will switch back to taskwarrior (never used it in distributed config accessible from multiple devices tho) updated and rebased https://codeberg.org/darkrenaissance/darkfi/pulls/250 Title: #250 - WIP: First iteration of task 17, WIF formatting - darkrenaissance/darkfi - Codeberg.org loopr: you can generate your own workspace: taud --generate but you should do local deployment for your personal tasks https://darkrenaissance.github.io/darkfi/misc/tau.html#local-deployment Title: tau - The DarkFi Book skipping dag sync, and have localhost as inbound without seeds, should do you can even run taud as a service, but that's up to you and your system greets gm brawndo: that hex you showed is smaller than p - 1 (max size for Fp) https://agorism.dev/uploads/pallas-vesta-size.txt loopr: it's fine to create other workspaces brawndo: does offset in .assign_advice() / selector.enable() refer to the row, whereas meta.advice_column() is a new column? Rotation::cur(), Rotation::next() .etc refer to rows as well (like offset but we don't know the exact offset in a gate) (row within a region) hanje-san: That hex is the maximum you can show Maximum 255+ bits does not fit in So you shouldn't assume you can fit 255+ bits inside of an Fp 07:50 brawndo: does offset in .assign_advice() / selector.enable() refer to the row, whereas meta.advice_column() is a new column? meta.advice_column() creates a new column, correct The former two you do within regions Rotation is also within the region, yes https://docs.rs/halo2_proofs/latest/halo2_proofs/poly/struct.Rotation.html Title: Rotation in halo2_proofs::poly - Rust It's relative to where it's being called, not absolute offset yeah i know max(Fp) = p - 1 < 2^255 - 1, but what i mean is that max(Fp) > 2^254 - 1 (so you need exactly 255 bits to represent it) so the offset is row-wise, though, right? so what they call offset means row offset This one is big, but has a lot of usage of Rotation: https://github.com/zcash/orchard/blob/main/src/circuit/note_commit.rs aha ty Yes that's correct https://github.com/zcash/orchard/blob/main/src/circuit/note_commit.rs#L698-L703 https://github.com/zcash/orchard/blob/main/src/circuit/note_commit.rs#L733-L737 See this for example So you see it's querying b_1 from the next row the weird thing in the merkle tree path hashing, is that even though poseidon is an algebraic hash, i treat it more like a one way function rather than an algebraic expression so the only way i see to do this is witnessing all the bits of the position and sinsemilla is a partial hash so they can split it up and use lookup tables, but i can't do that https://gh.mlsub.net/DrPeterVanNostrand/halo2-merkle Title: GitHub - DrPeterVanNostrand/halo2-merkle: Halo2 Merkle tree circuits Maybe this is helpful (didn't read) so after reading sinsemilla code and native range check, i will witness the bits for pos and do it that way... ofc i won't use 255 columns ok thanks, will study that https://gh.mlsub.net/teddav/tornado-halo2/blob/main/src/chips/merkle.rs yeah this is the same as what i want to do This is just from a quick search, I don't know how useful they are gtg now, will be back later today ok cya!
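(aside: a plain-Rust analogue of the path hashing being planned — witness the position bits, and at each level let the bit pick the left/right ordering, which is what the in-circuit conditional selects do. Hash and node types here are placeholders.)

```rust
/// Recompute a Merkle root from a leaf, its witnessed position bits
/// (little-endian, one per level) and the sibling path.
fn merkle_root(
    leaf: u64,
    pos_bits: &[bool],
    siblings: &[u64],
    hash: impl Fn(u64, u64) -> u64,
) -> u64 {
    let mut node = leaf;
    for (bit, sib) in pos_bits.iter().zip(siblings) {
        // bit == true means our node is the right child at this level.
        node = if *bit { hash(*sib, node) } else { hash(node, *sib) };
    }
    node
}
```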
i'm good thanks a lot loopr: get rid of these useless constants const MAINNET_PREFIX: &[u8] = "80".as_bytes(); remove the enum, instead the trait will have a method called get_wif_prefix() -> u8 (or similar), which the types themselves implement checksum_ok() <-- rename this to verify_checksum() fn from_wif(&self) -> Result; <-- do not use String to represent bytes in rust, it's Vec<u8> or [u8; N] also it should be : fn from_wif(&self) -> Result, then use Decoder::decode() (see src/serial/) assert_eq!(decoded, "0C28FCA386C7A227600B2FE50B7CAE11EC86D3BF1FBE471BE89827E19D72AA1D"); also wtf we said not to use hex strings already, i already said this multiple times that's not decoded bytes (except in javascript land) remove this crate! arrayref = "0.3.7" also your checksum is not 4 bytes, it is 2 bytes brb b 7b yoyo : @zero pushed 1 commit to smt2: 85d310ec92: prelim SMT2 ZK gadget brawndo: ^ going to replace SMT then merge this with master, altho will wait if you prefer to check it first (still will do some cleanup so no rush) also in smt why do you create new IsEqual, ConditionalSelect .etc chips instead of the user just passing them in? AFK for next few hours, will check back in re grants after. looking forward to talking :) hanje-san: Sure feel free to just merge The chips are configured through PathChip, but you can see they reuse the columns given to the pathchip You can make them externally as well, but that's not entirely sensible since it's going to be trivial to make a mistake wrt. layout : @draoi pushed 5 commits to master: b1cfbda94b: hosts: remove useless Result<> on subscribe_store : @draoi pushed 5 commits to master: 137eef25b7: doc: fmt and add dev NOTEs : @draoi pushed 5 commits to master: b9edcc6077: channel: expand ChannelInfo to include resolve_addr and connect_addr... : @draoi pushed 5 commits to master: b7c11c2bed: net: implement DEP-0001... : @draoi pushed 5 commits to master: 5d9c465b3e: doc: update DEP-0001 aha ty brawndo where can i find out about seeing the circuit layout picture, and seeing stats like number of rows used? i'm using the MockProver but don't see anything in the API. iirc there was something in the code generating a png before is it this one in darkfi/src/zk/gadget/arithmetic.rs:315 let root = BitMapBackend::new("target/arithmetic_circuit_layout.png", (3840, 2160)) hanje-zoe: a hex string was only used for the test, it doesn't matter for tests really as long as the conversion checks out, but I can use smth else then let decoded = wif.from_wif().unwrap(); assert_eq!(decoded, "0C28FCA386C7A227600B2FE50B7CAE11EC86D3BF1FBE471BE89827E19D72AA1D"); how is that just being used in the test? it looks like from_wif() returns a hexstring https://codeberg.org/darkrenaissance/darkfi/pulls/250/files, L137? that's inside `fn test_wif_to_str() {` for me loopr: the trait should be implemented for a bytes array, not String yeah I got that from the earlier feedback, so obviously that test won't work that way anymore fn from_wif(&self) -> Result this is wrong you are implementing for a generic T and you are returning a String yep hanje-zoe already mentioned did you check the DM I sent you? hanje-san | also it should be : fn from_wif(&self) -> Result, then use Decoder::decode() don't think I got that, my pi server was disconnected due to provider breakdown all of Friday did you get them now?
yes btw trait name should be just Wif (or WIF) not Formatter then you just implement the trait for a bytes array if you want to impl for String, then you can directly reuse the bytes array impl, by doing self.as_bytes().to_wif, self.as_bytes().from_wif, etc. etc. but it's not required, since the caller can call the trait fn directly from the string's bytes hanje-zoe: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/tests/dyn_circuit.rs#L118-L121 Title: darkfi/tests/dyn_circuit.rs at master - darkrenaissance/darkfi - Codeberg.org excellent ty np hanje-san: ah ok no rush, when/if this interests you just tag me here and i can guide you I could start working on those hardcoded generator points what about that pull req? shouldn't you finish that first gm gm pub const FP_HEIGHT: usize = 255; pub type SmtMemoryFp = SparseMerkleTree; .etc ^ should this stuff go in src/sdk/src/crypto/constants.rs or in src/sdk/src/crypto/smt/mod.rs? it seems weird to put SMT_FP_DEPTH (just realized FP_HEIGHT is wrong name) in constants, but the type aliases in smt i'll put it for now in smt, and move if it should go in constants !list No topics why does zk/vm.rs do condselect_chip.as_ref().unwrap() instead of just doing let condselect_chip = config.condselect_chip().expect("condselect chip"); at the start? The assignment happens before The latter is just referencing it, that's how rust works : @zero pushed 6 commits to master: fbbd9c5b2e: WIP refactor of SMT : @zero pushed 6 commits to master: 030d532222: smt2: replace generic_const_exprs unstable rust feature with temp workaround that can easily be changed once the feature exists in rust proper : @zero pushed 6 commits to master: e97ade3c9d: SMT2 ZK gadget : @zero pushed 6 commits to master: ba60fc05f3: switch zkVM to new SMT gadget : @zero pushed 6 commits to master: 11e39f07cf: mv smt2 smt : @zero pushed 6 commits to master: df1f9e744b: Merge branch 'smt2' merged \o/ brawndo: yeah but i'm saying instead of calling .unwrap() in each opcode, and using .as_ref().unwrap(), i think it can be done just once when the chip is initialized \o/ It's fine tbh The issue IIRC is because it's being done in a loop Why didn't you rebase before merging? Also didn't run the linter And the commit messages don't follow our standard practice :D i did rebase the commits, but kept them split up enough so the changes make sense true i should put smt: prefix before all messages df1f9e744bd1ae5fddb2c5abf04b0629b19aa792 The merge commit :( You should rebase so you can fast-forward when you merge to master ahhh ok good to know i thought we merge branches (my first branch in this project) Prefer not to, it's really bad to navigate 🫡 IMO better to just have linear history ok true, will remember next time nbd I'll review this when I finish some darkirc stuff Thanks https://agorism.dev/uploads/smt.png Are the tests working?
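(aside: the trait shape implied by the WIF review feedback above — implemented over bytes, returning bytes rather than String. Names and signatures here are my reading of the chat, sketched hypothetically; the checksum/encoding internals are elided.)

```rust
/// Hypothetical shape of the trait per the feedback: byte-oriented,
/// no hex/String round-trips.
trait Wif {
    /// Encode self (raw key bytes) as a WIF string.
    fn to_wif(&self) -> String;
    /// Decode self (a WIF byte string) back to raw key bytes,
    /// running verify_checksum() along the way.
    fn from_wif(&self) -> Result<Vec<u8>, WifError>;
}

#[derive(Debug)]
struct WifError;
```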
yep oh jesus cargo test zkvm_smt --all-features So k=13 doesn't work anymore it does if you don't use SMT Sure otherwise for SMT you need k=14 cargo test --release --all-features --lib smt -- --nocapture Yep that's why I'm saying We did add configurable k to zkas though ;) (The field is still just a placeholder) ACTION wipes away the sweat ^ that last test is in SDK https://a0.anyrgb.com/pngimg/1236/1996/excessive-sweating-hyperthyroidism-bagi-axilla-perspiration-sweat-guy-towel-know-your-meme-internet-meme.png *nod* ZK gadget test: cargo test --all-features zk::gadget::smt:: that's 3 tests, one in tests/ for zkas/zkvm, one in SDK for SMT, and one for the zk gadget Nice now i will add sparse_merkle_add() alongside merkle_add() in the runtime, and make money nullifiers use that. Also will add SMT::get_leaf() to runtime, and use that for checking if nullifiers exist. then will finish DAO changes after this ok so how would spending Money coins work now? The same, they just get added to the Merkle tree rather than just a set? the same as before. the only change is that when doing process() in wasm, you just call SMT::get_leaf() (using a WasmDb backend for the SMT) instead of db_contains_key() as we currently do Sounds good :) and secondly for update() we use insert_batch() which is slightly more expensive but money isn't really using the SMT, just making it available for other contracts *nod* It's great for the DAO voting indeed yeah or any ZK proof where you want to check a coin is valid (you need to see it's unspent otherwise it could be a spent coin) altho as you saw, the SMT proof is expensive but i can't really see how to get around needing to do 256 hashes and 256 conditional selects *255 Yeah dunno Could ask strad or Daira Or if you know anyone from PSE or Scroll anyway it's ok for now I suppose they're using SMTs for Efferium i only know arnau, idk if he's with them still !list No topics !topic updates Added topic: updates (by brawndo) should i take &mut SledDbOverlay or just clone the handle? &mut ++ But there's SledDbOverlayPtr &mut is fine here ty No It's not ;) You should use the type pub type SledDbOverlayPtr = Arc<Mutex<SledDbOverlay>>; it's just a small block of code where the overlay is locked(), then unlocked (like in merkle.rs) Please use the type You also have db handles in the runtime i mean here darkfi/src/runtime/import/merkle.rs:258 It depends on what you want to do, but I seriously doubt you'd use the SledDbOverlay directly Yeah so you'd get the entire blockchain db like that on line 257 And then lock the overlay and use it it's just for the duration of a single operation .insert_batch(), if we have to repeatedly lock()/unlock() while doing a bunch of .put(), it seems excessive this operation darkfi/src/sdk/src/crypto/smt/mod.rs:155 So hold the lock during the entire operation ok yep i plan to drop it right after Cool in runtime/import/merkle.rs, we read the current tree (line 168), modify the tree, then write it back (line 259). each time we lock/unlock. It's probably safe since we aren't writing async with contracts, but shouldn't we lock it for the entire duration? Yes you're correct ++ will change this after then ty gm hi <**DMaboutGrants**> hi folks Hi : @zero pushed 1 commit to master: c5166445d7: smt: get it working with the WASM, and add it to money contract for nullifiers. Summary of changes:... \o/ :D Nice !list Topics: 1.
updates (by brawndo) : @zero pushed 1 commit to master: 66b44abc78: runtime: lock sled overlay for the entire duration of merkle::merkle_add() [safety rzns] gm hello o/ Hello !start Meeting started Topics: 1. updates (by brawndo) Current topic: updates (by brawndo) hihi yo upgrayedd might be afk today Shall we just do a round of updates? ACTION found some bugs in the net code this am :D working on a fix, pretty straightforward, after this inshallah will be ready for review Nice brawndo did a prelim SMT impl but it needed some small work/changes, and then a bunch of scaffolding for integration ACTION working on darkirc services, so we can start registering anon accounts and work our way to RLN spam protection when needed i managed to make the PathChip not take a path during construction so in the VM we don't need to store the chip on the stack which was weird, and instead can use Value<[Fp; 255]> like with merkle tree ACTION working on tau tasks, for read-only key, I implemented a very no-brainer approach, having an encrypted password, and a key, anyone who can decrypt that password would have read and write access over tasks in tau, otherwise you're an observer with read-only access This is also opening up a good path to controlling the ircd through the IRC protocol fully * working on the WIF implementation, finalizing on that one then I am free to work on the generator constants hanje-san: Thanks a lot for those improvements nice, what does it mean registering accounts? you mean staking some DRK using darkfid? Yeah currently I'll just be doing keypair mgmt, and later on we will connect it to some contract also just now i updated the money contract, so will finish the DAO, then there's no more crypto changes and we finished... i'll move on to updating drk cli tool brawndo: can you share some info on services or link some code snippet to look at? The idea is to have some kind of "service" (think ChanServ and NickServ) in the IRC client so you can do all your ops within the app ahhh cool nice nice, can i do DAO stuff that way Sweet! https://github.com/darkrenaissance/darkfi/blob/master/bin/darkirc/src/irc/services/nickserv.rs I pushed some scaffolding here But have dirty local work I need to still clean and architect better cool did you see the telegram app store? i think i showed it to you Yes :) nice also i made a weechat ticket, did you see lol https://github.com/weechat/weechat/issues/2096 One thing that is unclear to me is how to make the ircd aware of the blockchain state Title: IRCv3 server-time messages delivered out of order are appended to the buffer which breaks IRC as a p2p protocol · Issue #2096 · weechat/weechat · GitHub ooh nice Will check that for sure seems they want to add editing the buffer for bridges like matrix where messages are deleted by ops .etc which would be sweet, we could add editing messages and also optional channel moderation Cool they didn't say no yeah im talking with the guy on libera, i got weechat compiled to play/poke around in :) i like the idea of having an array in your config file with ops = [key1, key2, ...] and then any of those keys can do ops stuff like delete or edit msg but it's optional mhm Though delete/edit in a p2p context means something else Nothing can really be deleted yeah it's just another message, altho i don't know how unlinkable messages could be edited or deleted yet ah yeah nvm it's easy The edit/delete is linked :) yep just that there's a future for IRC Obligatory: https://xkcd.com/1782/ Title: xkcd: Team Chat IRCv3 = web3 IRC?
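(aside: a toy version of the locking change in 66b44abc78 mentioned above — take the overlay lock once and hold it across the whole read-modify-write, instead of re-locking per step. The Arc<Mutex<...>> stand-in here is illustrative, not the real SledDbOverlayPtr.)

```rust
use std::sync::{Arc, Mutex};

/// Hold the lock for the full read-modify-write, as merkle_add() now does.
fn append_leaves(overlay: &Arc<Mutex<Vec<u64>>>, coins: &[u64]) {
    let mut tree = overlay.lock().unwrap(); // lock once...
    for c in coins {
        tree.push(*c); // ...mutate under the same guard...
    }
} // ...guard drops here, releasing the lock after the whole operation
```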
!end Elapsed time: 12.3 min Meeting ended nowebpls short-n-sweet hanje-san: If you have any wishlist for the IRC services, lmk your weekly mandatory social ACTION hands out 10 GBTs brawndo: i didn't want to say since it might be feature creep, but would be cool to have optional bots as plugins, like the meetbot p2p bots but it's not necessary, we got more important tasks Something you run alongside your daemon? Might be better to research this in the drk wallet context first yeah could be python scripts run by the daemon sure Yeah I think we'll learn better how to sandbox it when doing it for the wallet Then we can apply that knowledge to IRC makes sense ok thanks for the chat !end Elapsed time: 28512918.2 min Meeting ended thanks everyone o/ ty all ty all : @zero pushed 1 commit to master: fb4a521f70: zk/smt: fix broken unit test can't build tests, known issue? my env? error: couldn't read tests/../proof/smt.zk.bin: No such file or directory (os error 2) --> tests/smt.rs:36:19 36 | let bincode = include_bytes!("../proof/smt.zk.bin"); should it just be "../proof/smt.zk"? at least that makes them pass for me loopr: no tests pass after fb4a521f7042888c811f782cf1e21a16f097bdd7 so probably env issue did you pull? I actually just rebased well before lunch lemme check fb4a is in my history (after the rebase) but it still would fail but ok wondering what's wrong then, I'll check do you guys push directly to master? loopr when you run make test, does it build the smt proof? aha make test does Makefile is there for a reason, use it ;) I know you said that before. Can make test run single tests though? you can build the proofs using the make target and then run the test individually using cargo they should have been built by the make test you just ran, so single test run should work now (assuming you let it build them) OK, I assume make test is only due once after pull (or after I'd add something to it), then cargo is fine for my single test proofs have their own target, which test target uses in the Makefile (referring to Makefile targets when saying targets) kdoke hi all gonna be working on the atomic swap stuff :) my PR #246 is ready for review noot: hey and welcome first things first, can we ditch all the foundry bloat? second: ditch extra crates like async-watch, eyre and tracing which part in particular? like all of foundry? what do you use for logging instead of tracing? yeah, hardhat is much less bloat and doesn't require affecting root repo config we can make the ethereum contract stuff all live under that subfolder even trigger stuff from the bin/swapd makefile i use anvil in the tests for the atomic swap impl - even if i get rid of the foundry stuff, there will still be changes to the root config because of that anvil is part of foundry so that might be what you're seeing there can you point me to the tests? https://github.com/darkrenaissance/darkfi/blob/282aeb043391af419beb21848380c9f825baff56/bin/swapd/src/ethereum/test_utils.rs which is used here https://github.com/darkrenaissance/darkfi/blob/282aeb043391af419beb21848380c9f825baff56/bin/swapd/src/protocol/initiator.rs#L252 i would argue hardhat is more bloat as it requires having nodejs, i chose foundry for this as it doesn't require additional deps
without just moving it into a separate repo it also affects git submodules i can copy paste the contract dependencies into the ethereum folder and remove submodules if that's preferred does the repo have other git submodules? nope yeah using just the contracts that are needed is much better than pulling in the entire repos ok i'll change that that's not trivial right now you should start with ditching the extra deps namely: async-watch, eyre and tracing what should i use for logging? and instead of eyre? do you use typed errors for everything? do you have something already implemented i can use instead of async-watch? check swapd/src/main.rs we use log for logging for errors, you should create an error.rs file and create the corresponding error enums you need like for example in bin/darkfid/src/error.rs for channels we use smol there are a lot of example usages through the codebase, for example check bin/darkfid/src/task/miner.rs ok i can use thiserror for typed errors? where we create a channel so threads can communicate i also see you have anyhow in the repo, can i use that instead of eyre anyhow is used for cli tools daemons should have their own errors defined and/or use the main src/error.rs ones anyhow should also get the boot where it is used, so better to not use it in the first place okay cool, i'll update that btw anyhow is used in research stuff, not main repo for async-watch - i specifically wanted to use a spmc channel with the features that watch has, if i remove it i'll basically just end up rolling my own async-watch, which seems redundant what's the reasoning for wanting it removed? https://docs.rs/smol/latest/smol/channel/ Title: smol::channel - Rust smol is spmc i'd rather keep it than have to just re-implement it, there's only two files in async-watch anyways yes but smol channel only allows one receiver to receive the value i want to be able to watch for swap state and changes from multiple places well why not just open a bounded channel for each one? like a channel for each watcher? yeah since you have a single producer they can handle updating all the watchers which I guess you already know who is supposed to listen for updates are the watchers dynamic? yeah they're dynamic right now, and will have to be for the rpc cli are these rpc calls one way? I mean the cli performs the request and waits till it receives a response it's not like a subscription, correct? in the existing swap impl the rpc has subscriptions, it's not strictly necessary but makes life easier for e2e testing nw we can work with that so the daemon keeps a set of open subscriptions (aka channel senders) and simply pushes the update to each of them then you receive the rpc request to subscribe, you create a channel and push its sender to the daemon set, and use the receiver for the "waiting" logic using watch is simpler because you just clone the watch and wait on changed(), don't need to push a sender to the daemon using channels, the rpc module will need a reference to the daemon as well (or some way to push the channel to it) the point is to minimize external deps, and in this case we are discussing a dep that does pretty much what an already existing one does.
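(aside: the channel-per-subscriber pattern suggested above, sketched with smol channels and hypothetical names: the daemon keeps a set of senders and pushes each state update to all of them; an RPC subscribe creates a channel, hands the sender to the daemon, and waits on the receiver.)

```rust
use smol::channel::{unbounded, Receiver, Sender};

/// Daemon-side set of live subscriptions (one sender per subscriber).
struct SwapUpdates {
    subs: Vec<Sender<String>>,
}

impl SwapUpdates {
    /// Called by the RPC subscribe handler: returns the receiver to wait on.
    fn subscribe(&mut self) -> Receiver<String> {
        let (tx, rx) = unbounded();
        self.subs.push(tx);
        rx
    }

    /// Push a swap-state update to every live subscriber, dropping
    /// senders whose receiver side has gone away.
    async fn broadcast(&mut self, update: &str) {
        let mut live = Vec::new();
        for tx in self.subs.drain(..) {
            if tx.send(update.to_string()).await.is_ok() {
                live.push(tx);
            }
        }
        self.subs = live;
    }
}
```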
The extra code handling is much preferred imo to importing it rpc module can have the reference to the set no problem the rpc handler is a trait, that is implemented for the daemon main struct Swapd so you can add the set there, and rpc already has the reference, since it is a field of self hmm okay is there a criteria for when new deps can be added? like how is the decision made on what to add just curious cause i'm new i also noticed there's deps in the root Cargo.toml but the crates within the repo that use those don't use workspace=true, is that for a reason? re: criteria: since we want to minimize external deps, we add stuff we absolutely need and that doesn't make sense to impl ourselves can you give an example for the Cargo.toml deps? basically everything in here https://github.com/darkrenaissance/darkfi/blob/master/Cargo.toml#L53 is there a reason they're defined here? ah nevermind, everything in src uses them lol lol yeah I was waiting for you to see it nw i was looking at the Cargo.tomls in bin/ lol haha yy gotcha I was like: 1) What lmaooo my bad getting used to the repo structure so everything in src/ is one big crate right? pretty much yeah, src defines the lib features i see and bin/ are the daemons/applications we build using them if you see in all bin/*/Cargo.toml darkfi is the main dependency, along with the features we need from it i see, it's split by feature, nice unrelated, but i get compile errors on the toolchain in rust-toolchain.toml :/ when i switch to latest nightly it's fine you are supposed to use latest nightly oh okay rust-toolchain.toml uses nightly btw you can start using the root Makefile omg nevermind iirc swapd is already in there and you can use stuff like make fmt and make clippy fmt to properly format the code to our spec i think i had an old version of rust-toolchain.toml or something (or one i updated previously lol) and clippy for linting etc lol cool, will do, thanks ok so I assume you will start with the small stuff (-tracing, -eyre) and then go to the smol stuff right? ACTION is not sorry for the pun haha yes exactly so we're set on no async-watch? it's less than 300 lines with comments and everything is it just that it's a dependency at all which is bad? yeah but we can achieve the same with like 6 lines (with comments) hmm okay i'll try changing it and see it's because it's redundant, as smol is already there and yeah, the less external deps the better can you also drop async_std in dev-deps? especially the tokio ref XD I guess that you are using it to run async tests but guess what, smol already does that: https://github.com/darkrenaissance/darkfi/blob/master/bin/darkfid/src/tests/forks.rs#L27 just tried with smol::block_on and the tests fail ---- protocol::initiator::test::test_initiator_swap_success stdout ---- thread 'protocol::initiator::test::test_initiator_swap_success' panicked at /home/e/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.36.0/src/net/addr.rs:182:48: there is no reactor running, must be called from the context of a Tokio 1.x runtime this is coming from ethers-rs i believe requires tokio :/ let me check backtrace shows ethers-providers-2.0.13/src/rpc/transports/ws/backend.rs:68:41 well well well This crate is in the process of being deprecated. See #2667 for more information.
https://github.com/gakonst/ethers-rs/issues/2667 Title: ethers-rs is being deprecated · Issue #2667 · gakonst/ethers-rs · GitHub another one takes the boot then :D yeah the replacement isn't ready yet though afaik https://github.com/alloy-rs/alloy Title: GitHub - alloy-rs/alloy: Transports, Middleware, and Networks for the Alloy project > ethers-rs will continue to be maintained until we have achieved feature-parity in Alloy. No action is currently needed from devs. yeah alloy isn't ready if the parts we use right now are working, why not change to it from the get-go? to alloy? I guess it's not ready > (Soon) alloy: Rust interface to Ethereum and other EVM-based chains. --> (Soon) middleware: Alloy Middleware for overriding default interactions with a chain. (Soon) chains: Canonical type definitions for EVM-based chains. from the github org description these are the parts i need I also see a hard dependency on tokio.... doesn't look like they have anything ready apart from types and serialization why are eth devs like that? yeah lol ok I say we start with the other deps first and we see about that later if you recall we haven't touched on actual code yet XD lolll feel free if you want to start taking a look at the code will do, by tomorrow (TM) thanks :) will work on the dep updates in the meantime yoyo noot: check darkfi/src/system/subscriber.rs:36 this implements pub-sub channels with multiple receivers also check darkfi/src/system/condvar.rs (sometimes useful for signalling between tasks) if you need something non-async then we can implement it using lockless queues gm : @zero pushed 1 commit to master: 6bee5bf416: subscriber: add docstrings, and simplify a method. gm : @parazyd pushed 1 commit to master: ec5984685b: chore: Add missing license headers gm XD oops : @zero pushed 1 commit to master: 309157e0ba: runtime/merkle: return early with SUCCESS (but give a warning) if the coins list for changing the tree is empty. We also don't do any gas calc since nothing on disk was modified. oh nice, cargo +nightly fmt will now change `use xxx::{yyy};` to `use xxx::yyy;` : @zero pushed 1 commit to master: fd1d154b15: runtime/smt: return early with SUCCESS (but give a warning) if the nullifiers list for changing the tree is empty. Hey when reviewing these functions Make sure there's no way to do an infinite loop e.g. by constantly calling the function with no args so it returns with no gas use increase Every call should use at least some gas to mitigate infinite loops hanje-san: ^ hey : @zero pushed 1 commit to master: cde5f7cea2: runtime/merkle: simplify the function. we do not need to store the intermediate roots for the tree since each update is atomic. Also the serialization length == 32 should be an assert rather than a conditional check. brawndo: aha good point ty. will make sure of that. fyi merkle has a small .subtract_gas() call at the start as well but i didn't know why it was there... good to know
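re the infinite-loop point above, a minimal sketch of the idea (Env, BASE_GAS and SUCCESS are illustrative stand-ins, not the actual runtime API): charge a flat cost before any early return, so empty calls can never be free:

```rust
const BASE_GAS: u64 = 1_000; // assumed flat cost per host function call
const SUCCESS: i64 = 0;

struct Env {
    gas_used: u64,
}

impl Env {
    fn subtract_gas(&mut self, amount: u64) {
        // The real runtime would also abort once the gas limit is hit.
        self.gas_used += amount;
    }
}

fn merkle_add(env: &mut Env, coins: &[[u8; 32]]) -> i64 {
    // Unconditional charge first: spamming empty calls still burns gas,
    // so there is no free infinite loop.
    env.subtract_gas(BASE_GAS);

    if coins.is_empty() {
        // Early return with SUCCESS (warn, but gas was already charged).
        return SUCCESS;
    }

    // ... update the tree and charge additional gas for the disk writes ...
    SUCCESS
}
```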
i split the commits so they're easier for you to review ty can we delete pub(crate) fn get_verifying_block_height_epoch(mut ctx: FunctionEnvMut) -> u64 { darkfi/src/runtime/import/util.rs:230 because it is synonymous with darkfi_sdk::blockchain::block_epoch(get_verifying_block_height()) upgrayedd: ^ : @zero pushed 1 commit to master: 010ea6037d: runtime/merkle: db_roots store key=blockhash, value=blockheight (before value=[]) ^ this is to keep track of state changes. Ideally our DB itself would be a time series one, but we don't have that luxury, so I just added the blockheight as a value alongside the merkle root hashes we store. oh wait damn blockheight is not granular enough actually is there a single incrementing value i can use? something corresponding to the tuple (blockheight, transaction index, call index) Why do you need it at all? so we always have this issue in ZK of generating proofs for some state, but the blockchain is dynamic (I cannot guarantee when/where my tx will execute in the blockchain). The way we avoid this problem is by having some algorithm which picks a periodic discrete state. see darkfi/src/contract/dao/src/lib.rs:98 (because we cannot use the exact blocktime, we instead use 4-hour windows. if your tx doesn't get confirmed within that window, you must remake it and try again - possibly with a higher fee) the idea with this triple is that the contract can choose exactly which merkle root we must use, otherwise if the user can supply one, they might give you one that is advantageous to themselves somehow You should just use the root at the time of proposal creation ideally the merkle root picked is done algorithmically, and you must construct your ZK proof for exactly that one (not the merkle tree +1 or -2 etc. from that interval) The wallet should maintain such a tree and be able to copy it at the time it recognises a proposal tx Then for each proposal you'll have the Merkle tree it's expecting let's take proposal creation as an example then. I supply merkle roots both for coins and nullifiers. what if I pick a really old one? so then you need to at least check it's somewhat recent Why would you supply anything? The contract should handle that At time of execution to make the proposal, I have to use my coins to show I have a certain threshold of coins, right? but this is done inside ZK, so we have to pick some merkle roots for coins and nullifiers which one is used? you cannot use the very latest unfortunately so my solution is to just use an interval You should be able to use anything as long as you're proving it's unspent Why does it matter how old it is? aha i think actually the DB can store the index incrementing itself brawndo: i mean for verification, you cannot use the latest roots Where? In Money we keep a set of roots for each change in the bridgetree So anything is valid in the ZK proof. I have to make a proof, then get it confirmed. When verifying the proof, the roots used to verify it will be old already don't you think it's maybe bad if the roots are like really old? what if i buy gov tokens then sell them, but I keep making proposals which i can vote on In the Burn proof you're proving the Merkle path corresponds to _some_ root There is the set of all roots which this root is checked against (It has to exist) yep but *which* root you use will have an effect Additionally the nullifier would prevent double-spending, no? for example, here's a DAO attack: DAOs A1, A2, ... let i=1, buy gov token for DAO A1, sell gov token, let i=2, ...
rinse and repeat now i can make proposals for all DAOs A1 to An where I have a huge voting power and i can vote in all of them Moving that coin should be creating a new coin and thus disallowing voting on proposals that have been created before that coin when the proposal is created, it commits to the money coins and nullifiers tree, so even if you move the coin, you can still vote on the proposal i don't see any workaround for that (because then you can make a proposal and keep moving coins around but still keep voting) so we just commit to a snapshot (by committing to the coins/nullifiers roots) and transfers don't exist in this snapshot Why would you be committing to the nullifiers tree when creating a proposal? so people cannot vote with spent coins Can't you just check the latest Money tree for that? I don't understand the nullifier snapshot It should be checking the latest ones, unless I'm missing something it can use the latest ones, but you need some notion of what you mean by 'latest' to use it in ZK Well likely then the sparse Merkle tree is the wrong choice that is, the contract needs some way to algorithmically select a nullifier tree root You want to not reveal the nullifiers, but also be able to look at the latest state The nullifier state that's being looked at should always be the newest one, since it represents the "current" state of spent and unspent coins Anything that is not the latest can be gamed in one way or another let's say we create a proposal, isn't it ok to simply use an agreed upon snapshot from 1 hour ago? (universal across all DAOs) like you must use those particular roots, none other > use an agreed upon snapshot from 1 hour ago? What does this mean? the state of the coins (both coins and nullifiers). I make a ZK proof using those merkle roots, and get my tx confirmed. The WASM contract uses those as public inputs. block time is 90 secs, so approx every 60 mins = 3600s or 40 blocks, the coin and nullifier roots used roll over I'm not sure I like this idea Something feels wrong so it's like discrete snapshots: use the roots corresponding to blockheight = floor(current_blockheight/40)*40 Why can't we make something deterministic? wdym? this is deterministic (calculated in WASM) This feels like it'll go wrong horribly for some reason Time is not deterministic it's not using time but blocks 11:18 block time is 90 secs, so approx every 60 mins = 3600s or 40 blocks, the coin and nullifier roots used roll over Also what do you mean by rolls over? so when we verify the ZK proofs, they need a coin and nullifier merkle root to be passed in. We require that these roots are somewhat recent, and also we want which root is used to be deterministic (algorithmically selected rather than allowing the caller to select one) *we need a ... > hanje-san | so people cannot vote with spent coins I don't think this holds true when you use a snapshot I can transfer the coin, and then make a voting tx using the old (invalid) coin to vote unless you are checking the current money state, to see if the invalid coin's nullifier has been revealed brawndo: so rolls over means that the coin and nullifier roots are delayed. let's say block 100 is the current one we're using. At block 140, the contract begins using the latest roots again. At block 141, it uses the same one still, ...
until block 180 upgrayedd: correct, but also you cannot double vote by sending to yourself, and also the person you sent to cannot vote (which were the main attacks we wanted to protect against) so the logic behind the snapshot is that only people that held those coins at that exact period can vote, regardless of whether they used the coins after? yes why not add a second check, to also check the current state, to eliminate that last part also because of the way ZK public inputs work meaning that they had to hold the token at time of snapshot and at time of voting if i give you a proof for merkle root R, you cannot verify it with merkle root S upgrayedd: because it's in ZK, we cannot see which coin/nullifier is being used. they are completely hidden latest doesn't exist in ZK, the only thing we can do is pick some point in time which i'm saying we deterministically/algorithmically pick periodically I'm not saying to check in zk do the check in wasm contract code we do this here: darkfi/src/contract/dao/src/entrypoint/vote.rs:84 if you do it in wasm then it breaks the anonymity wait the input.nullifier is different than the coin one? no it's the same you cannot check a coin is spent otherwise, that's why we need the SMT well then you are already revealing it no? vote.rs:158 this needs to be fixed hence the recent SMT changes aha i want to remove input.nullifier ideally (since it doxes your coins showing you voted) also when you create a proposal, everyone with gov tokens is eligible to vote. so it doesn't matter whether we use the latest nullifier tree or not. the only thing that matters is that the coins merkle root is fixed. right now it's arbitrary, but ideally it would be algorithmically selected which coins root we're using the main issue is with proposals imho, when you prove ownership of a threshold of coins to be eligible to make the proposal that people can vote on in this case, right now the nullifier state is the latest but i want to change it to the deterministic snapshot then the issue could be someone acquires a large amount of gov tokens, makes a proposal where they have majority voting power, then sells those tokens but if we're worried about this attack, we can simply use the user_data field of coins (so your gov token has a spend hook with a special contract to enforce this when calling money::xfer()) in the user_data field we store the coin's creation_timestamp (blockheight etc.). then inside the DAO proposal, you can only create proposals if your coin is older than 40 blocks deep. that doesn't fix it There must be a better solution This feels hacky ah ok, what should i do then? i didn't find one yet and have thought about this the last few weeks it's not just about making a proposal, it's also about not being able to vote after you move your coins i don't think that matters I haven't thought about it but I can ok i'll wait then a bit yeah it does, whale buys tokens, waits 40 blocks, creates proposal, immediately sells them and can still vote so what? they cannot do it for multiple DAOs lol they must wait 40 blocks each time 1) What they can do it for multiple daos, they can do the init buy on all the daos they want to attack effectively passing a proposal to the daos for free here's the attack. I have $100 and i want to vote in 10 DAOs. Normally i would only have $10 of voting power in each DAO, but with this I have $100 immediately.
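a tiny sketch of the deterministic rollover described above (the 40-block epoch is the example value from this discussion, not a settled constant):

```rust
/// Example epoch: ~60 min at 90 s block time, per the discussion above.
const SNAPSHOT_EPOCH: u32 = 40;

/// Every node's WASM computes the same snapshot height, so the proof must
/// use the coin/nullifier roots as of floor(current/40)*40, none other.
fn snapshot_height(current_height: u32) -> u32 {
    (current_height / SNAPSHOT_EPOCH) * SNAPSHOT_EPOCH
}

fn main() {
    // Heights 80..=119 all map to the same snapshot...
    assert_eq!(snapshot_height(80), 80);
    assert_eq!(snapshot_height(119), 80);
    // ...and at 120 the roots "roll over" to the next one.
    assert_eq!(snapshot_height(120), 120);
}
```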
but with the time delay, you only have $100 per single voting period *coin creation should be as long as proposal is open for voting so buy coin, wait a month, open proposal, vote with $100, sell coin, wait another month, ... not a very practical attack hanje-san: Was the prior mechanism fully good, besides revealing nullifiers when voting? mostly yes, except one minor thing which is that the coins merkle supplied is chosen by the caller but maybe it should be selected algorithmically or at least checked to be relatively recent. I don't see why it has to be recent, except if you want to prune the roots set every once in a while The recent-ness I think doesn't change anything i believe that proposals only matter, not voting. 1) if a proposal is bad people will vote no on it, 2) we copy the coins/null state and verify that, and it doesn't matter whether coins are spent. what matters is the state then and there, so the main issue is that the proposal commits to a valid state. yeah true the recentness doesn't matter, you're right actually, but for nullifiers it does matter. There is no solution like this for double-voting I think each proposal should have its own set of coins that can vote on it For example you should know the full supply Then the proposal can create 1 coin with the full supply, and then people who vote could take from that supply that amount which they vote with yeah the supply is (usually) known. the issue is how you allocate that supply of voting power which is what i'm saying, we just snapshot the state at periodic intervals A voter will have their coin, so effectively they would just swap it with some amount in the proposal coin Some kind of wasm code could control this perhaps Not revealing nullifiers will always allow double-spend A Merkle tree doesn't help with this I'll be back in an hour or so, need to run an errand we have this right now where every time you make a proposal, or you vote, you must also add a sibling money::transfer() call ok cya talk later Yeah I'm just saying perhaps we're thinking about it from the wrong perspective Will think a bit, biab ++ maybe keeping what we already have is the way? hanje-san: vks/pks hashes are changed? I think this is the correct pair: 828a7f6d28fc2ec89898f3eb29a00e972609debfab2574e4473f18607a9fd1ec c1f1d1bcab1a6046c5a938931f6eab28d4da5840d4bd80a0f0d719d654699439 hanje-san: What we have now works but is also not ideal hanje-san: Do you think there is some way that works perhaps if we did everything within the same proof? It's probably correct that each proposal has to create some state snapshot But this kills all the coins that might be moved in the meantime So e.g. there can be a proposal you like, and you want to buy some tokens so you can vote on it - it's not possible It's not a trivial problem to solve The info a proposal can have is the total supply of tokens that can vote on it I was first thinking that each proposal could act as some kind of swapping mechanism, but that is also broken once I gave it some more thought The core issue is that we want to prevent double-spend/double-vote We currently have this, with a tradeoff in anonymity: some arbitrary coin is now revealed to be the gov token What are the practical consequences of this? Keep in mind it's also still not possible to obtain tokens after a proposal is created in order to vote on it (I think this is an issue as well) brawndo: re: proposal you like..
that introduces the attack vector of me transferring my coins to myself so I can keep voting Obviously I still think it's an issue that would be good to solve if possible From a UX perspective it's rly bad Like just imagine LunarDAO investing in a project, makes a proposal You see the proposal and want to join in Sorry not possible you should've been here before the proposal yeah I understand the use case : @skoupidi pushed 2 commits to master: aae713227f: contract/test-harness/vks: updated hashes : @skoupidi pushed 2 commits to master: 41c9bd28ba: validator: updated sled-overlay version and use new diffs logic for finalization brawndo: so what we currently have is that the nullifier is revealed, but then every time you make a proposal or vote, the wallet must also move their coins at the same time we might decide this is better since it always uses the latest nullifier rather than some snapshot (and maybe more efficient since we don't need the SMT) Which invalidates voting on two proposals yep it would break that unless you do it within the same tx true about being able to join a proposal by buying coins, i didn't know it was a huge issue and i don't see rn how to allow that without also allowing double voting will think a bit more, but lmk whether you prefer the current approach (which makes wallet logic more complicated since it has to track multiple coin states per proposal) or the SMT approach (which makes DAO voting more expensive) : @skoupidi pushed 1 commit to master: 696bc213a0: validator: fixed stupid mistake I'm not really sure. I trust you to make the better choice since you put more thought into it But I'm thinking if there is some generic solution to this What we want to achieve is a simple idea, just a bit difficult in our context Got a dc after my last msg hanje-san: What if we introduce another keypair for voters? Could something maybe be done that way? Like binding the coins to some additional key Or something like RLN which could force each coin to be able to vote only once on a proposal I suppose voting-with-coins-newer-than-proposal isn't doable guys I assume it is ok to ask code questions here, while learning especially initially I had fn from_wif(&self) -> Result<Self> in the trait However that doesn't compile, because the return is not Sized I changed it to this now: fn to_wif(&self, is_main: bool) -> Self; fn from_wif(&self) -> Result<Box<Self>>; So while to_wif returns Self, from_wif returns a Result<Box<Self>> is this an acceptable design? Result<&Self> seemed like an option, but then returning a ref from the implementation looked pretty hard (I understand the concept of ownership so returning a ref to a local variable does not work) brb will eat dinner then answer bon appetit b brawndo, so there's 2 approaches. I think RLN is orthogonal to the issue of checking whether a coin being used for voting is spent or not. Essentially RLN is like saying "what if we make another nullifier for the voting" - sure we already do that (store the nullifier in the vote_nullifiers db), but the issue is that we only want to allow coins that are valid to vote and the issue is checking the coin has a corresponding nullifier. we can either do this in ZK, or not in ZK, but if we don't do this in ZK then the coin must be immediately moved because we doxxed it. currently we are doing #2, and could maybe keep it that way (wallet logic is complex, but SMT puts a lot of onchain burden). also as you said we're using an older coins set, but the newest nullifier set.
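for clarity, a self-contained sketch of approach #2 as described: the vote doxxes the coin's nullifier, so the wallet bundles a sibling money::transfer() in the same tx to re-mint the coin (all types and names below are illustrative stand-ins, not the real darkfi API):

```rust
#[derive(Debug)]
struct ContractCall {
    contract: &'static str,
    func: &'static str,
    payload: Vec<u8>,
}

#[derive(Debug)]
struct Transaction {
    calls: Vec<ContractCall>,
}

fn build_vote_tx(vote_payload: Vec<u8>, remint_payload: Vec<u8>) -> Transaction {
    Transaction {
        calls: vec![
            // The vote call reveals the coin's nullifier on-chain...
            ContractCall { contract: "dao", func: "vote", payload: vote_payload },
            // ...so a sibling transfer immediately moves it to a fresh coin,
            // keeping the wallet's spendable state consistent.
            ContractCall { contract: "money", func: "transfer", payload: remint_payload },
        ],
    }
}
```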
loopr: make a minimal example and post here or in #rust on Libera IRC but that is not the correct approach, you must return Self which is Decodable also remove that bool is_main i told you, there are no consts defined, the trait defines what that byte is ok off to sleep gn > hanje-san │ i told you, there are no consts defined, the trait defines what that byte is I am afraid I don't get what you mean. As per the btc wiki, the byte is different for a mainnet or a testnet related object are you saying that we don't care about this distinction, and we just use one single byte for everything? > hanje-san │ but that is not the correct approach, you must return Self which is Decodable `to_wif` returns Self But `from_wif`, because it is using bs58::decode, can't just return Self. either we unwrap (which I got the feedback elsewhere I shouldn't), or it returns a Result. But Result<Self> does not compile on a trait (only Sized allowed). so I am not clear what the way out is. the example is the PR itself (it's actually pretty small to be nearly minimal). https://codeberg.org/darkrenaissance/darkfi/pulls/250 Title: #250 - WIP: First iteration of task 17, WIF formatting - darkrenaissance/darkfi - Codeberg.org In the meantime, I think it's reasonable I could start looking at the hardcoded generator const would it be useful to be able to start a dev network locally with docker images? i know there is contrib/localnet the advantage of using docker images would be that it'd be easier to share work in progress, or run others' branches, etc. it'd make most sense tho if there'd be a central repo for the images somewhere, not sure if that's maybe not ideal loopr: can you explain how that's useful? how is sharing an image different from sharing a binary? I run it and then what? If I need to change or fix stuff, I still need to access the code, therefore having a dev environment, etc. etc. for network simulation specifically, why use a virtualized environment and degrade performance? as I said, "it'd make most sense tho if there'd be a central repo" then one could just refer to an id instead of passing binaries around yeah I'm asking specifically the advantage of a docker image over just a binary, in that same "sharing" context (lol) docker pull instead of "please send me the binary you were working on" every project is different, I like workflows custom fit for teams so it may well not make sense here, while it made sense elsewhere automating (reproducible) nightly builds/tests was one when I'm saying binary in that same sharing context, I mean like a repo containing builds, where you can pull from, not manual comms... the only "advantage" in docker is having a constant environment, but still it depends on the usecase there are pipelines that do automated checks in the gh repo, is that what you mean? yeah in that scenario (which I have never seen before), there's probably no difference other than the standardized docker tooling yep pipelines like that i could ask differently is everyone happy the way you can currently run local dev networks? why what's wrong with it? how simpler can it get than a script?
haven't heard any complaints okey doke it's just that I did work on such an env with docker images recently and indeed this is just a question if it might have been useful not saying there is anything wrong with it (in fact I haven't played much with it yet) I don't think such an env is needed, at least right now, it might be useful for testing releases' compatibility with each other, but still I prefer not to bloat stuff, since just running binaries over a script is pretty much the same thing ++ . .... hi loopr: yes return Result<Self> let wif = String::from("3RH5ferSx2yBBNuLk"); let decoded = wif.into_bytes().from_wif(); this is incorrect let decoded = Foo::from_wif(wif).unwrap(); ^ this is correct impl WIF for Vec<u8> { ... } <- this is incorrect, it should be: impl Wif for Foo { fn wif_prefix(&self) -> u8 { 87 } } trait Wif { fn to_wif(&self) -> String { ... } ... } also from_wif(wif) should automatically verify the checksum when decoding gm hanje-san: I think 'fn to_wif()' should give a Vec Then you can work with the bytes If you need a string, you should implement Display to_wif() should return a base58 encoded String Actually not even a Vec but a fixed-size array Display should do that, then you can to_wif().to_string() it can't be fixed size because the type being encoded is variable why do you need to work with the bytes? it has a checksum so you cannot change it You can support different encodings that way It's a minor change for more flexibility i can't think of any encoding i'd use apart from base58? it has a prefix byte + 4 byte checksum, and the user passes it around unless you mean base64 or encoding the string with japanese kanji etc. Well it depends on the usecase What WIF is made for is just for secret keys yeah but we will use it for many different types like passing DAO info, etc. Sure i'd argue that introducing multiple encoding types would get confusing, and we should just default to base58 strings Yeah You're right ++ hanje-san: Say we're storing some items in sled. They could or could not be part of a struct. Do you think it's better to just store the entire struct as a single serialized item, or to just have keys corresponding to the multiple items we want to store? I'm kinda leaning toward the latter yeah i generally prefer the latter too. it is easier to debug ++ (see db_info in money) Yeah : @parazyd pushed 2 commits to master: 50f7220341: chore: Clippy lints : @parazyd pushed 2 commits to master: a9a6cb4ef9: darkirc: NickServ account registration... hanje-san: Are your sparse Merkle tree changes still keeping the deterministic ordering? Meaning a user doesn't have to care about the leaf position It's known generally just by knowing the leaf yes correct, and also the position in the tree is the same as the leaf value itself (pos = leaf) ok cool altho the gadget/zkas call allows specifying a distinct pos I wonder if we should use that for the RLN too, did you benchmark the proof speed? Verification speed at least no benchmarks yet but i want to, however the SMT is interesting, it allows doing a lot of new things like: for example with voting, you can have the set of all proposals, then if the coin voted in the proposal, add it to the set. the coin contains the root for that set.
I dunno if it's good though, it'll use a lot of bandwidth I'd want to keep darkirc minimal since phones however i'm not sure if it solves the DAO issue since if i transfer you a coin, i need to send you the entire SMT tree too and i'm not sure how we can xfer such a large amount of data anonymously The IRC account registrations should be done on-chain I suppose? But then you can't really run a node on your phone yes exactly by staking coins well what if the blockchain is optionally on another device It would work but that's not chat.exe anymore on-chain at least allows you to keep the bridgetree which is smaller and has faster proofs we could have a faucet for free accounts I think the SMT proofs are big (Too big for a messenger I mean) aha yeah it would be smaller with hyperelliptic curves Like imagine cooking your CPU for each of these msgs Battery goes to zero XD And you can't fully make it algorithmic either because you can't identify a spammer Because in p2p people relay other ppls' msgs So essentially you always need an RLN proof You can't just require them at some point i think it's ameliorated with swarming, since you would only participate in the traffic for projs you're interested in Alternatively we could build a smaller hardcoded proof with just bulletproofs perhaps And not use halo2 bulletproofs is good too Still I doubt I can impl poseidon in that lol you could do mimc, but even poseidon is possible i'd just use halo2 tbh and benchmark, see if it's any issue Yeah better to just have it working but ok, doing it on-chain doesn't require SMT then since the position in the bridgetree will also be deterministic There can't be a mismatch between two nodes executing a contract yeah but you don't want every message on chain you could use lighter forms of consensus like event graph or a per subnet blockchain No the messages don't go on chain Just the identities for registering an account Everything else is ephemeral great perfect So my basic idea would be: 1. _Somehow_ register a new RLN account on-chain: Then you have your two RLN secrets and the leaf position in the Merkle tree of accounts 2. Register these secrets and the leaf_pos into your darkirc 3. Use the account However you'd also have to keep in sync with other accounts' registrations, I'm not sure how that should be propagated public node? Yeah I suppose upgrayedd: RPC sub feature request: topics upgrayedd: So you can have a JSON subscriber notifying you of a specific contract only This is a bit flaky though, suppose you go offline, then reconnect, you'll likely be missing some accounts So now upon reconnection darkirc has to sync the accounts again before being able to operate properly it's not a big deal since some messages might get through and you might have to retry sending until synched They can be shared through the event graph although there's no consensus on that and it's easily scammed : @parazyd pushed 1 commit to master: 1e5e56c9ea: darkirc/nickserv: Feed RLN identity through REGISTER brawndo: ACK it can be an optional param on blocks sub, but I guess having a separate rpc method is more preferred I think the former is better You'll have less mgmt Have every sub have optional topics Then you can for loop easily and check which sub wants what yy that's what I was thinking : @draoi pushed 1 commit to master: 5f55e877ae: net: enable hostlist migration + bugfixes... hanje-san: > hanje-san | trait Wif { fn to_wif(&self) -> String { ... } ... } if this is a blanket implementation, doesn't this require self to be of Display? otherwise how can I get self to convert to bytes or string for wif construction
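fwiw, one hedged way the blanket impl can avoid Display is to make the byte conversion itself a required method; everything below (sha256d, the String errors, the 4-byte checksum layout, the bs58/sha2 crates) is an illustrative assumption, not the repo's actual API:

```rust
use sha2::{Digest, Sha256};

/// Double SHA-256, used here for the 4-byte WIF checksum.
fn sha256d(data: &[u8]) -> [u8; 32] {
    Sha256::digest(Sha256::digest(data)).into()
}

trait Wif: Sized {
    /// Prefix byte identifying the encoded type (e.g. 87 for Foo).
    fn wif_prefix(&self) -> u8;
    /// Serialize self into raw bytes (e.g. via darkfi-serial's Encodable).
    fn to_bytes(&self) -> Vec<u8>;
    /// Deserialize self from raw bytes.
    fn from_bytes(bytes: &[u8]) -> Result<Self, String>;

    fn to_wif(&self) -> String {
        let mut payload = vec![self.wif_prefix()];
        payload.extend_from_slice(&self.to_bytes());
        let checksum = sha256d(&payload);
        payload.extend_from_slice(&checksum[..4]);
        bs58::encode(payload).into_string()
    }

    fn from_wif(wif: &str) -> Result<Self, String> {
        let bytes = bs58::decode(wif).into_vec().map_err(|e| e.to_string())?;
        if bytes.len() < 5 {
            return Err("wif string too short".into());
        }
        let (payload, checksum) = bytes.split_at(bytes.len() - 4);
        // The checksum is verified automatically on decode.
        if &sha256d(payload)[..4] != checksum {
            return Err("checksum mismatch".into());
        }
        // Skip the prefix byte and decode the inner type.
        Self::from_bytes(&payload[1..])
    }
}
```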
: @skoupidi pushed 1 commit to master: 5623914db7: Removed swapd : @dasman pushed 2 commits to master: d47eac66a4: bin/tau: setting access types as read-write and read-only(default) : @dasman pushed 2 commits to master: f7d0b7a5ac: bin/tau: print info on cli when access denied draoi: I'm getting version errors on tau, should I update my nodes yet, or wait for your signal Contributing with Tor - Verify your key by signing the message: echo -n 'XXX' | ssh-keygen -Y sign -n gitea -f ~/.ssh/id_tor is this possible if the ssh key has a passphrase? got an error, couldn't find any option ok, -P, but that leaves the phrase in plain text in history, wonder why such a sophisticated tool doesn't support prompting (unless I missed it) loopr: append DISPLAY= before the sign command echo -n 'XXX' | DISPLAY= ssh-keygen -Y sign -n gitea -f ~/.ssh/id_tor like so aha! thanks it's ssh-keygen thinking it's not a terminal so it tries to open a gui or something btw any disadvantage of using tor-browser instead of the normal browser with the socks proxy? so I would not have to install another browser normal browser still fingerprints, keeps cookies etc. etc., so it's up to you on how tight you want to be ah well, considering one goes through the effort of using tor, feels best to do it thoroughly lol manjaro linux is mentioned in https://darkrenaissance.github.io/darkfi/dev/learn.html Title: Learn - The DarkFi Book : @dasman pushed 1 commit to master: acb3aeb3b9: bin/tau: update cargo's package description <;)> gm ;): gm greets these papers are interesting: https://scholar.harvard.edu/files/mickens/files/atlantis.pdf https://mickens.seas.harvard.edu/files/mickens/files/veil.pdf gm hanje-san: why is this a debug statement? that should be an error right https://codeberg.org/darkrenaissance/darkfi/src/commit/acb3aeb3b98ed21bce4503935dc7cb5c329ca8f9/src/net/message_subscriber.rs#L203 Title: darkfi/src/net/message_subscriber.rs at acb3aeb3b98ed21bce4503935dc7cb5c329ca8f9 - darkrenaissance/darkfi - Codeberg.org draoi: because a node could send you faulty data that doesn't decode... it's not something wrong with the user's setup, nor with our code. it could be a warning but sometimes people get spooked by those ok true loopr: as i said already please look at darkfi serial, Encodable and Decodable types it would be helpful if you copy messages to a text file, and anything you don't understand, ask, rather than me having to repeat myself every day dasman: yes plz update can't promise there won't be further updates due to bug fixes but it's mostly done, now just testing phase i posted some up-to-date peers + seeds in #random if you need them for bootstrapping also hanje-san, if you wanted to review etc, that would probs be now i have to finish the DAO first, it's delayed cool take ur time no stress then i'll happily do that ty i now have 3 fixes for the dao and they're all kinda shitty codeberg tor is not working for me Not working how? Sometimes pushing hangs yeah just hanging for ages. my tor is fine but i restarted it 3 times It's codeberg not Tor Happens with clearnet too yep https://agorism.dev/uploads/dao.md can you look at this?
Just leave it or try again in a bit ACTION clickety-clicks i am trying to push rn, also hanging Yeah the third one is why I wanted to implement arrays in zkas So you could have multiple spend hooks I don't know what the correct solution is i think #2 is the simplest, but also #1 is fine if you prefer > Spent coins spent after the vote is proposed will still be able to vote. This sounds bad it's alright, new coins cannot vote : @draoi pushed 1 commit to master: 0c23ba0947: manual_session: fix bug which caused peers to get stuck in Connect state... It means that I can dump my gov tokens and still vote with them yep but after all current proposals expire, you cannot vote That's not the point It opens a path to hate-voting what's the difference between voting then dumping vs dumping then voting? In the latter case there is a price change on a market There's no difference really in the way you put it though But there's subtle game theory stuff yeah but it's not catastrophic imho Can't think of any bad scenario Doesn't mean there isn't one :) But sure ;)) lol : @draoi pushed 1 commit to master: 6ba22f7a85: manual_session: avoid risky op when all attempts to manual connect fail... so I got a result: going from commit d160e9616 Mar 16 down in history, commit 532d67e97__2024-02-29 was the first I was able to test and build on arm64v8; the problem was mostly the harness test tests::sync_blocks failed root: that test should be fixed after 41c9bd28ba68beeb2efffdd75c5ea76ca4ef6b95 afk : @skoupidi pushed 2 commits to master: 269cffbd1c: validator: purge unreferenced trees from sled when reseting forks : @skoupidi pushed 2 commits to master: 23d49cd158: validator: use sled-overlay add diff functionality to rebuild forks fyi agorism.dev letsencrypt expires today : @skoupidi pushed 1 commit to master: c15facda1c: validator: fixed not saving PoW module updates to db draoi: panics -> https://bpa.st/4KUA Title: View paste 4KUA dasman: can you send the full log? it looks like your node tried to connect to a peer that it was already connected to, which is forbidden it indicates a bug somewhere else in the program as that should never happen but it's really hard to tell w/o logs, i haven't had this on my side gm for evaluating the cost of a ZkCircuit, configure(meta) is unreachable!(), instead we have configure_with_params(meta, _params), but CircuitCost uses configure(meta). Can I rename configure_with_params() to configure(), and make configure_with_params() delegate to configure()? let cost: CircuitCost = CircuitCost::measure(zkbin.k, &circuit); wtf why are all the fields of ConstraintSystem pub(crate). that's so goddamn annoying i cannot access the debug info since we're maintaining patched halo2_proofs, can we also make those fields accessible? gm hanje-san: No please leave it as is We're following upstream Changing such method names breaks a lot of things https://fs.blog/chestertons-fence/ Title: Attention Required! | Cloudflare
https://github.com/darkrenaissance/darkfi/blob/master/src/zk/vm.rs#L311 type Params = (); ^ If you do this, it'll call configure() brawndo: the thing is that CircuitCost is calling configure i'm not proposing to rename configure_with_params(), just simply to make configure() implemented > Can I rename configure_with_params() to configure() you forgot the second part I'll just fix it in the halo2_proofs thread 'zkvm_smt' panicked at src/zk/vm.rs:324:9: internal error: entered unreachable code i get this because CircuitCost is calling configure() (rather than configure_with_params()), but we don't actually use the params in configure_with_params() could we change the visibility of the fields in ConstraintSystem? right now they are all pub(crate), and it's what's used by CircuitCost and CircuitLayout. But the images from CircuitLayout are really hard to see. I tried making it SVG and making the text tiny / super high res images, but I'd prefer the raw data rather than the image. lemme check I pushed a fix for CircuitCost, just pull darkfi I'm going to implement the methods rather than exposing them, since usually you use `mut ConstraintSystem` so I don't want to give a way to fuck things up ok gotcha ty btw i made a util method benchmark_wasm_calls(), i added it to src/contract/test-harness/src/lib.rs, is that ok? or should i put it in util.rs? hanje-san: Which fields of ConstraintSystem do you need? Yeah it's ok in test-harness for now everything used by CircuitLayout/CircuitCost so we can properly inspect our circuits. I will add python bindings commit bot is down draoi: you can see github actions, there should be a commit notifier, then check the logs. most likely it just can't reach the public IRC instance hanje-san: Pushed https://github.com/parazyd/halo2/commit/23d312ee30307c47388a810c2997a2d19186e24a Title: plonk: Export ConstraintSystem methods for debugging · parazyd/halo2@23d312e · GitHub I suppose this is fine, lmk when you try it great will begin poking around next few days Cool thanks upgrayedd | root: that test should be fixed after 41c9bd28ba68beeb2efffdd75c5ea76ca4ef6b95; started testing commits one by one from the current top, 48 to go root: but why? draoi: https://bpa.st/54ZQ Title: View paste 54ZQ i should rm saved hostlist before connecting again right? also, yes commit bot down because of this ^ gm! o/ gm I'm exploring ideas regarding the credentials scheme First, the blockchain in this case would be needed to issue a credential-token, but for verification, is it fine to verify it off-chain or would it need to be on-chain too? Second, I suppose that normally in DarkFi the sender needs to know the receiver address. But would it be possible right now.. I don't know, let's say that a party A locks a credential-token in a smart contract with a commitment of a secret attached, then A sends to B the secret so that B can claim the token without A linking its address?
given that B knows the secret needed to claim the token the secret is shared off-chain the second is a more general question really (for any token, not credentials) https://darkrenaissance.github.io/darkfi/arch/anonymous_assets.html Title: Anonymous assets - The DarkFi Book https://darkrenaissance.github.io/darkfi/zkas/examples/voting.html Title: Anonymous voting - The DarkFi Book https://darkrenaissance.github.io/darkfi/zkas/examples/sapling.html Title: Anonymous payments - The DarkFi Book https://darkrenaissance.github.io/darkfi/spec/contract/money/scheme.html Title: Scheme - The DarkFi Book there is a contract or issuer of the credentials therefore there is a set of all issued credentials to delete credentials, a nullifier is created : @draoi pushed 2 commits to master: 0329e2b296: settings: delete deprecated quarantine setting : @draoi pushed 2 commits to master: c2941c4726: error: more descriptive error handling for state transitions to prove a credential is valid, i prove i have a credential in the set of all issued credentials : @parazyd pushed 1 commit to master: 68214cde3e: chore: cargo update if you want the ability to delete credentials then use the SMT (recently added) to authenticate with a service, you attach a proof about the credential (which commits to several attributes) maybe try to find a concrete example if this is too abstract : @parazyd pushed 1 commit to master: 3192390fa3: chore: Update halo2 repo ref Ideally that set is represented with an incremental merkle tree right? : @zero pushed 2 commits to master: 70cce66740: doc: dao notes : @zero pushed 2 commits to master: 05ea80bd99: wasm tests: add option to generate a CSV of benchmarks ash: correct the best thing to do is understand how money::transfer() works, and then how you would use the same inc merkle tree + nullifiers to do voting draoi: I set peers = [...] manually, and redelivered the missed commits :) and then once you understand voting, you understand fundamentally how zk contracts work essentially because now you have sets you can add things to and remove from the panic stuff still happening Good. What I have seen is that credentials are issued by signatures on documents. But I'm confused, in this case does the proof of membership (I am a valid leaf of the tree) replace the need for the signatures, right or not? Trying to figure out the crypto primitives needed :S hi there hey loopr Hi brawndo btw you have my pub key? ash: correct good (y) thank you Now I have more clarity. My idea of the tutorial is to take as an example a contributor credential. Where one of the attributes is the level of permission of the project: (1) Core contributor (full w/r); contributor (partial w/r); auditor (only read access). Either you reveal it or make a proof about it When it is ready I will send the document that I'm currently working on loopr: I don't think I do This is my pubkey: 6NpTuikk64ejox5h7TyRJG7F3x1ea3tYWs7KwMvFw1HE brawndo: mine: 4rzHWemAB35pLjGZeKeCdGYKRa3ZG5QNRGcrJecwjgU3 Mine: 3Jf8aEHStM3HKN1U2LRprpdiWL4hqSFkui8r38a922vZ loopr: Can I text you to test private messaging? ash: sure, give me a sec to restart ircd with your pubkey thanks! : @skoupidi pushed 1 commit to master: b70ade1922: validator: permanently store ranks as blockchain expands : @skoupidi pushed 1 commit to master: 7edbac5c65: darkfid/tests: minor fix dasman: can you confirm on which commit hash the panic happened?
draoi: 05ea80bd9944660105773ea050e255dca24e6160 it only happens when I'm connecting to my own seed ty no problem at all also can't recreate it locally gm hey hi ash: you can add me? [contact."narodnik"] contact_pubkey = "Didn8p4snHpq99dNjLixEM3QJC3vsddpcjjyaKDuq53d" gm : @draoi pushed 1 commit to master: 1d6f1175be: net: fix bug that was causing duplicate connections to seed nodes... It's getting better :3 wew fixed fyi dasman ty for reporting : @draoi pushed 1 commit to master: b5764d2c9f: store: remove redundant Result<()> type on register_channel() : @skoupidi pushed 3 commits to master: 5cac7b404d: sdk: chore clippy and typo : @skoupidi pushed 3 commits to master: 74ed38a7e6: blockchain/contract_store: added auxilliary fn get_all() to WasmStore and ContractStateStore : @skoupidi pushed 3 commits to master: f46c3f7a68: script/research/blockchain-explorer: updated to latest darkfi structures draoi: awesomely awesome! yw \o/ : @skoupidi pushed 1 commit to master: 8871f0898d: validator: eat ze bugs https://github.com/informalsystems/quint Title: GitHub - informalsystems/quint: An executable specification language with delightful tooling based on the temporal logic of actions (TLA) we can use this to spec p2p and check it for correctness note to self: we have 2 "smart contracts" pages under arch on docs brawndo: snapshot voting is common in eth: https://ethereum.stackexchange.com/a/127346 Title: For decentralized governance on Ethereum, why is Snapshot considered "off-chain" but Tally considered "on-chain"? - Ethereum Stack Exchange further votes can be announced ahead of time so people have the opportunity to buy in (we could later add a delay feature if needed - cosmos gov module has this) : @zero pushed 1 commit to master: 74b7d2f7b8: py: Proof.create() return None when failing rather than use .unwrap() : gm : someone is helping us test lol https://xeno.tools/uploads/testing.png : @zero pushed 1 commit to master: 48d23df367: create zkrender tool to plot circuit layouts : @zero pushed 1 commit to master: baa0146834: tests: add zk benchmark : @zero pushed 1 commit to master: ae0e8f6ff3: doc: add desktop zk benchmarks : @zero pushed 1 commit to master: 544a7b7a21: doc: add laptop zk benchmark : @zero pushed 1 commit to master: 783aea03b2: doc: add graph of zk benchmarks haume: Sounds good : @zero pushed 1 commit to master: 99d5e54883: test-harness: change wasm benchmark to use microsecs instead of millisecs : draoi: I doubt it's someone : probably a bug storing outbound addresses? : hi : is ircd fully dead? : HCF: hi : no : ircd is still the main thing, but darkirc (this chat) is under development and test : ++ thanks : so I should build and run ircd from the release commit? : np : yes tag v0.4.1 gm so there is not a single commit that would pass the build and test on my ARM (rk3588) going from Mar 22 down to Feb 29 but the funny thing is, between working and non-working git diff 532d67e9..bc1f903d, there is only python bin/deg and readme and toml cfg added .. I feel lost :D : check the ports dasman : it's the same node accessing from many ports : we had similar attacks/testers before : the good news is they were filtered out by the refinery hi o/ o/ !list No topics haumea: The benchmarks are likely incorrect since you're just increasing k but keeping the same circuit : hi o/ yeah i'll look into that too, there's a difference but i'm not sure how big it is : testoor might be me?
I am using my configs from a while back so maybe I'm connecting to the infra in the wrong way : hey greptile : config should be the same i think : yeah it's working fine : I signed in yday and you made a similar comment about the presence of a tester : could be a big coincidence but I wonder if my machine is doing something odd : it's as if there's a loop and someone is spinning up Connectors on different ports : but seems i cannot connect to the addresses : hm I'm running ircd and darkirc but I'd be surprised if that was the result : when you say 'Connector' is that a component in darkfi code somewhere? : btw set this in the config: hostlist = "~/.local/darkfi/darkirc/hostlist.tsv" : or similar path : thanks I'll take a look. : Connector is basically the underlying thing behind outbound sessions. outbound sessions create Connectors and inbound sessions create Acceptors : I let the bins regenerate my configs but I re-added manual peers and my contact list : the automatic path for hostlist is null which means it gets deleted on stop : ah I see : but if you have a hostlist you should reconnect more seamlessly : do I need to add to it manually or should the IRCs populate it? : you need to specify the path but then yes it will be written to by the node : the refinery is really cool. I haven't read that part of the code before : that's new :) : based on monero p2p : it's called "greylist housekeeping" in monero : ah makes sense, I remember seeing similar logging when I played with monero : imo this is probably an attacker doing a 'port scan' of darkfi nodes : it's really common, happens in normal web land all the time : interesting : most of the time people use a tool called `nmap` and it's not flexible enough to be able to trick the refinery : it does create a lot of extra work for the node especially as there are more hostile addrs than safe ones rn : but good that it's coping somewhat : that's probably going to be true forever :) nmap takes just a second to launch and network requests are basically free : do we have benchmarking on where the 'extra work' is taking place? there might be a way to more aggressively drop nodes : and/or limit the number of attempts we try to ping them, because that could cause extra effort : I've only skimmed that part of the code so maybe these mechanisms already exist : the refinery just pings a node once and then deletes it if it's inactive : but when we receive info from another node it could be re-added to our greylist and go through the refinery again : ah ok : this is intentional since we don't want to start banning nodes that have patchy connections : however rn we have v little protection from attackers : there's a method called 'blacklist' that we basically never use draoi: can't the refinery do something like "grouping" based on peers' addr? I mean if all these "peers" use the same addr but diff ports, you can easily identify them : maybe nodes that are trying to connect with the same ip but over many ports can be considered hostile, idk really draoi: just remember that you need to account for some ports, as connections can be both ways aka inbound and outbound : another metric might be if they try to connect using ports that are always wrong or with protocols that are obviously wrong : do different node services have expected ports? e.g. something like "we always expect RPC requests to happen on port XXXX"?
: if so it is possible to create a list of known-good ports and maybe consider connections to other ports to be suspicious : diff services have default ports but the ports are configurable : but i suppose we could restrict that : or the known-good list could read from wherever that's configured : it could be dynamic : like when the node spins up it could make a list saying "these are the ports from which I am expecting connections" : ah i see what you're saying : so the nodes are restricted to the port configured in their node : yes though 'restricted' might be a bit over precise : connecting on ports other than 'known good' ones could be suspicious but not like an instant ban if that makes sense : actually let me explain my whole thought process. the context might be helpful. I'll be brief : - Assumption: someone is doing an nmap scan : ++ : nmap takes a range of ports as an argument : ty : so you can sweep the whole range of ports (a u16 so 0-65535 or something like that) : this is inefficient so oftentimes people will use a range of "top ports". these are the most common de facto ports for different services : so: 22 for ssh, 80/443 for HTTP(S), etc. : if we see scan traffic that appears to be probing for "top ports" I think we should consider that very suspicious : there is no valid case where a node should be checking if SSH is exposed on another node, for example : ++ : so I guess I'm suggesting to design _against_ that pattern : makes sense : another tactic: Solana nodes allow things to be configurable but only within a range. So you can expose e.g. your RPC port but only within the range of 8000-8020 : connections inside the range can be considered normal and outside the range are suspicious : so we have limited configuration in a sense but still leave plenty of options : https://nmap.org/book/port-scanning.html#most-popular-ports Title: Chapter 4. Port Scanning Overview | Nmap Network Scanning : nice : this makes sense : great :D : i'll reflect on how to implement that, the port range restriction seems v easy to do : ++ : I think it would be pretty effective too : then for the "top ports" i suppose we just disallow them since they're out of range, and maybe reject connections from peers with those ports : yep I think that's safe : a good minimal solution could be to ban anything below 1024 : nice : these represent privileged root-only ports for the most part : also anything that's specific to Microsoft services, mail servers and so on : ++ : I have to step away for a while. I hope to be back for the dev meeting but not 100% sure on that : nw, tysm for your input : :) !topic p2p hardening Added topic: p2p hardening (by draoi) : np happy to discuss more later : bye for now o/ : see you o/
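a hedged sketch of the heuristic discussed above (the below-1024 cutoff and the service-port list are illustrative, not settled policy):

```rust
/// Reject advertised peer addrs on ports that no darkfi node should use.
fn is_suspicious_port(port: u16) -> bool {
    // Privileged root-only ports (ssh, http(s), smtp, ...) are never
    // legitimate peer ports.
    if port < 1024 {
        return true;
    }
    // A few common "top ports" for other services, per the nmap list.
    const KNOWN_SERVICE_PORTS: &[u16] = &[1433, 3306, 3389, 5432, 8080];
    KNOWN_SERVICE_PORTS.contains(&port)
}

fn main() {
    assert!(is_suspicious_port(22)); // ssh
    assert!(is_suspicious_port(3306)); // mysql
    assert!(!is_suspicious_port(26661)); // plausible configured peer port
}
```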
hello fyi running ircd on my rpi 3 was crashing the device periodically running it somewhere else now What's the crash? !topic dao update Added topic: dao update (by haumea) gm! hey ash, can you add me in DM? [contact."narodnik"] contact_pubkey = "Didn8p4snHpq99dNjLixEM3QJC3vsddpcjjyaKDuq53d" hi o/ sup brawndo: not sure, it would just go away, I assume memory but no traces of oom so ¯\_(ツ)_/¯ I mean the whole pi would suddenly go down hi holla bulla bullish hm weird perhaps you could tail dmesg There's also netconsole in the kernel Hey all !start Meeting started Topics: 1. p2p hardening (by draoi) 2. dao update (by haumea) Current topic: p2p hardening (by draoi) loopr: I'm running ircd on a riscv soc with 1 core and 1gb ram, so I press (x) to Doubt re memory haumea: sure gm hey so you may have seen the convo with greptile earlier, they had some good input on protecting against the nmap attack which we seem to be experiencing on darkirc basically 1. restricting ports to certain ranges, 2. blacklisting peers that use "top ports" (anything below <1024) any short summary? i'm feeling unwell (slight cold again) ^ Why is it an attack? upgrayedd: I see, well didn't want to spend too much time debugging for now, but I might go back to it. thanks for the pointer yeah why is that an attack? I mean, what does it do to the p2p? I stopped my darkirc node for now, couldn't sync dag draoi: I don't think top ports is accessible the "peer" is always hitting our advertised port and then the kernel assigns a new port for the direct stream !topic dag sync Added topic: dag sync (by haumea) so if for example you run a node, I can make 2 connections to you via let's say port 1840 draoi: so what's the attack? the first will get some random 18401 and the second 18402 sec, finding the name the only thing I see is having multiple connections for the same end peer ah yeah port scan attack hence why I mention "grouping" based on addr earlier draoi: Doesn't explain the issue port scan is a system attack so the attacker is scanning for any open ports by checking what ports are open in your advertised ip it has nothing to do with our p2p That's a systems problem, you should run sshguard or fail2ban spinning up nodes on multiple ports like looping from port 6000 -> 7000 the fact that our p2p is having so many same connection ports to same addr, is that they constantly retry to connect to us therefore we store each new "attempt" under a different assigned port by constantly retry to connect I mean they spawn a lot of processes that connect to us, not actual retry why does restricting port ranges protect against port scanning? haumea: it's not, probably misunderstanding of the attack bc you reject peers that don't use the ports within the specified range draoi: the peer will always hit your advertised port and then the kernel will assign a port for that specific conn yeah that doesn't make sense... ok that's what you store in hostlist so if I constantly connect to you with different processes you will have a list with all the connects you opened with me all same addr/ip, different ports basically, we have nodes with a single IP that seem to be running outbound sessions on multiple ports (we are talking like 50) and clogging up the hostlist why is that happening? draoi: yeah you should handle peers as a group of same addr/ips umm that's totally normal i do not know, greptile said it looked like a port scan attack haumea: I explained why/how that can happen yeah but your explanation makes it sound like a bug Each outbound session will assign a port for a connection I constantly create new processes and connect to you, so your kernel generates a new random port for each con and you end up having multiple records in your hostlist because external_addr is set in the config file that's the addr i sent around the network, not the address assigned when connecting to an inbound external_addr has nothing to do with it only external_addrs should be allowed into the hostlist how are other addrs getting into the hostlist?
they're not only external_addrs go in the hostlist the other party can generate a list with N external_addresses therefore sending you all of them so are these node(s) with many external_addrs set and different ports? same external_addr, different ports yes possibly, this is pretty easy to do Nothing wrong with that either you just spawn a proxy let's say we restrict port range like we do for tor what's to stop me spawning tons of ips? not much you can do against that restrict port range is not the solution lol we established that XD esp since we're designing for tor too yeah it's a weak solution, i'd rather do nothing so normally for external addrs to be broadcasted they must be valid reachable addrs m2 Is there any issue in the current code relevant to this or what are we talking about? :D idk continue? on darkirc there's a flood of nodes with the same addr but different ports *shrug* Well that can happen anytime wait, is there any issue with this? surely the network is able to operate you will eventually filter them out no? They should just be pruned if unreachable after a while if not it seems more a flaw of p2p design rather than needing to restrict port ranges since the first failed ping, they should go into greylist and retry later it's creating pressure on the nodes since we have so few healthy nodes on the network so it's 95% these haumea: ++ ok well if you want, you can try to vary hosts you pick like i think nym tries to pick geographically distinct ip ranges, but we don't need to go that far just when sampling random addrs, prefer addrs where we aren't connected to that host already normally if the external_addr is unreachable it shouldn't be broadcasted, but these nodes seem unreachable, so maybe it indicates a bug- i will check draoi: I guess the issue is that they are not getting filtered fast enough if they are unreachable, the issue is as follows: recv addrs -> go to greylist -> get filtered out -> recv again -> and so on forever slowing down nodes with garbage so they end up broadcasted since the node might not have actually checked them first catch22 lol since greylist is persistent you can check whether or not you filtered them out in the past upgrayedd: for external addrs nodes actually ping themselves and directly add them there it's separate to the refinery greylist is not persistent, greylist forgets nodes that fail the refinery ah yeah we removed the blacklist because of tor and it not being needed but maybe there's a case for some kind of temp buffer aha ok didn't know that blacklist still exists it's just not being used well you can use sled as a cache on the disk so it's persistent no pls however you can still spawn loads of fake external_addrs lol haumea: normally if the external_addr is unreachable it will not be broadcast maybe if a node sends too many unreachable addrs, ban them? because it means their refinery is faulty nodes cannot send unreachable addrs since they ping themselves oh i was talking external addrs yes but if they do then it means they're faulty/misbehaving You can spam whatever addresses to whoever, it's not real rn the network is still working even tho it's 95% these addrs, and it should still be OK even if it's 99% since we have "anchorlist" connections that are stable however it does create pressure/work on the refinery so with this logic, when you add an addr, you take a weak ptr to the channel.
after refining, you then get the channel ptr, and you raise the watermark the watermark is the subsystem handling when connections breach DoS limits, and when breached the channel gets banned which means if a channel sends N unreachable addrs, it gets closed/banned rn nodes send their greylist that means that you can close/ban the seed tho in protocol addr the algo is like this: send from anchorlist, if there's space, send from whitelist, if there's space, send from greylist (cos we need peers with diff transports) iirc the nodes send a % of whitelisted and % of greylisted we set the limit quite high so it only happens if sending too many unreachable hosts when the network has v few stable nodes and many many sketchy greylist nodes (like now), normal peers are broadcasting unreachable addrs That's not a good solution for a small network, anchorlist and whitelist will be small so majority will be from graylist You just need more efficient pruning ok that also makes sense therefore you will close the seed and never connect if they mostly have unreachable graylisted addresses does this problem disappear with more healthy nodes on the network? it doesn't disappear but it improves Likely I don't think so It depends on the number of your outbound sessions wdym re: pruning brawndo can you elaborate But most cases you'll want up to 20 or so and what part of the network you connect to More connections is just overkill On a healthy network you should easily be able to find 20 peers if for example you only connect to border nodes you might end up in same situation rn we have maybe 3 or 4 good peers lol ACTION upgrading darkirc node now ty I can run a few too if things are rdy when I am running my ircd I also am such a peer correct? no we are talking about darkirc loopr: we are talking public nodes ah ok sorry the ones who advertise an addr to connect to if you haven't set an external_addr no you are not such a node I can certainly run one or so too, are instructions on the docs page? prob building a darkirc binary from master? brawndo: lemme run a couple final tests then will give you the go ahead to spin up nodes ACK ++ fine to move on from this topic, will read over and consider what was discussed loopr: Yeah it's just darkirc from git HEAD ++ !next Elapsed time: 29.0 min Current topic: dao update (by haumea) i want to run more tests and understand broadly the costs between #1 and #2 #1 and #2? strategy #2 (the SMT) is more expensive but conceptually clearer #1 is what we currently have and is more efficient, but less correct and makes impl wallets tricky !topic consensus updates Added topic: consensus updates (by upgrayedd) i reckon the cost is not that bad so i'm slightly leaning towards SMT currently, but don't want to rush into committing to a bad decision can you elaborate more on pros cons of each? maybe it's wrong since i'm preferring on chain cost to make wallet impl simpler, but it seems worth it and what are the potential problems you're looking to solve https://darkrenaissance.github.io/darkfi/arch/dao.html#anon-voting-mechanics Title: DAO - The DarkFi Book i started doing benchmarks and improving tooling, https://darkrenaissance.github.io/darkfi/dev/bench.html want to understand the costs for zk proofs better Title: Benchmark - The DarkFi Book Coins spent after the vote is proposed will still be able to vote.
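back to the watermark idea above, a minimal sketch of the shape it could take (hypothetical names and types, not the actual subsystem):

    use std::collections::HashMap;

    // Hypothetical per-channel watermark: count addrs that fail the
    // refinery, and signal close/ban once the (high) limit is breached.
    struct Watermark {
        unreachable: HashMap<u64, usize>, // channel id -> failed addr count
        limit: usize,
    }

    impl Watermark {
        fn record_unreachable(&mut self, channel_id: u64) -> bool {
            let n = self.unreachable.entry(channel_id).or_insert(0);
            *n += 1;
            *n > self.limit // true => close/ban this channel
        }
    }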
yes but this is fine, it's common on eth if you also check the current money nullifiers they won't https://ethereum.stackexchange.com/a/127346 Title: For decentralized governance on Ethereum, why is Snapshot considered "off-chain" but Tally considered "on-chain"? - Ethereum Stack Exchange yeah but that has 2 downsides: - links votes across proposals (same nullifier) - wallet needs to move coins simultaneously while keeping track of spent coin (i make a proposal then vote) ain't that also mean that I can spend the coins to myself to keep voting with my coins, no? where spent=transfer no because we snapshot the coins state so new coins are unusable (this is the main issue why we're snapshotting) aha hence why you need to spend the coins after vote so they won't get linked gotcha gotcha gotcha yep but with SMT you don't need to spend the coin after voting since the nullifier isn't revealed so it's nicer but more costly our gadget is completely unoptimized so the circuit looks retarded https://agorism.dev/uploads/smt.png well that's what you get when you live on the cutting edge of a space :D so much wasted space we could introduce more columns so fewer rows are used etc. You're not counting the other chips that will be in that circuit yep exactly More columns means slower verification yeah this requires k=14 but we don't use all the rows mhm i want to explore exactly what those chips are but tbh maybe k=14 is not that bad It's not bad it starts getting terrible when k=16+ We have the scaffolding for circuit optimisation too Just need to come up with a really good algo And that is a bit hard i think it's so weird this stuff is done manually really good algo: column empty: chop it XD but i guess early days Early yes yeah i thought about that, if a gadget isn't used then disable the column Also it's intuitive actually but tbh we use most gadgets in nearly all prod circuits But yeah a machine can do it well if you teach it well ideally you could feed it some kind of expressions and it would figure out how to split the columns/rows. there's even a cost model however lookup tables and other quirky things make it harder to reason about You know all the gadgets that are needed and all the opcodes ahead of time yeah i mean even the structure of the gadgets themselves The algo needs to know how to lay it all out in the most optimal table like what gates are used, or the shape of the gadget each rotation is an exponentiation etc. anyway off topic !next Elapsed time: 16.0 min Current topic: dag sync (by haumea) dasman: so there seems to be dag sync issues, tau is not showing tasks, my darkirc isn't relaying etc. where is the tool to see the graph? you have a load of tasks, what's happening? I reckon it's a peers issue, dag sync works and worked quite well It's in bin/deg ok nice I've finished two of them, and working on the event graph replayer rn nice deg looks good ty will try to figure this out with tau I updated my nodes, but darkirc node just crashed maybe tau is too idk so for deg, is it showing the graph? cos i just see a bunch of info but not the actual tree you know like `tig` no it doesn't show the graph, it's a debugging tool to see the msgs and info about the graph yeah but how do i know the structure of the data that i'm viewing? it's just a list of hashes displayed for example with tau, i have no idea what i'm looking at am i looking at darkfi-dev DAG or some other dag?
You also got unreferenced tips and broadcasted ids so you know what has been broadcasted by you and what's received yeah but how can i see the DAG? i have this issue where i can see nodes connecting to each other and doing protocol addr/ protocol ping but the event graph is not syncing why would that happen? if the other peer also doesn't have the latest tip? you can't actually view the tree using deg I'll make that happen, it actually makes sense right now i have tau which is not properly synced, and i connected deg but it literally just shows this: https://agorism.dev/uploads/screenshot-1711382227.png I went fully like dnet it doesn't even need to be a tui, it could be a cli, but it's more important there's a viewable representation of what's happening internally so when we see an error with sync, we can debug exactly which event causes the issue there's no info here draoi: you ask for tips and compare them with your local tips and check what's missing, and then ask for missing tips /events So if a peer is "undersynced" you won't sync from it haumea: ++ ok lets move on !next Elapsed time: 12.5 min Current topic: consensus updates (by upgrayedd) o/ hihi greptile o/ hey we missed you <3 likewise so I've been running contrib/localnet/darkfid-five-nodes over the weekend to see how the consensus protocol goes so 5 miners so far sitting at block 1550, no-one ever got out of sync, everyone always converged to the correct fork memory consumption for the darkfid is like ~3gb and another ~1.5gb for minerd we should update the consensus page (but we're all busy), i got a friend with a master's in PoS who will happily take a look so future looks bright bright dark obviously they are all local nodes, so some time in the very near future we should test out with real life net conditions sled db size? haumea: what page? arch/consensus brawndo: 32MB ahahahaaha haumea: its already updated to reflect PoW do you mean something else? That's nice sled-1.0 will also have proper zstd compression ACTION just realized (s)he must update the math formulas well its empty blocks, just the coinbase txs looks like some script shooting txs could be useful? re: sled db size Yeah still good for 1.5k blocks well iirc we haven't really setup the block boundaries right now we still use just the 50 txs limit i'm actually surprised how fast wasm is You didn't believe me :p 9 MICROsecs for an empty wasm call 9 micros is literally 0 haumea: last time I checked the consensus formulas they were good unless something changed that I forgot btw I made a hella lot of optimizations in the way we handle forks moved finalization time from like 10secs to instant oh yeah i tried showing him but didn't see the formula, just saw on the page it says: "To measure that, we compute the squared distance of its height target from MAX_INT" LMAO all thanks to sled-overlay i guess euclid or some other ancient greek wrote that lol >chad greek writes math formulas and leaves no explanations anyway, to wrap up, still a lot of testing to do, especially wrt txs, malicious attacks on forks, network conditions shit like that brawndo: there's no math formula, it's just text :D haumea: feel free to add all the formal stuff yep will do, it's simple also will be happy for anyone to read and discuss/feedback the consensus logic my gate is always open + anyone that wants to test stuff with me, with remote nodes etc ash: here? Here you added me? Yes sent you a DM upgrayedd: does a txs load script exist?
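re the dag sync flow described above, the tip-compare step as a sketch (hypothetical types standing in for the real event_graph API):

    use std::collections::HashSet;

    type EventId = [u8; 32];

    // Ask the peer for its unreferenced tips, diff against ours, and
    // request whatever is missing (then recurse on unknown parents).
    // Note this only learns events the peer itself references, which is
    // why an "undersynced" peer can't bring you up to date.
    fn missing_events(local: &HashSet<EventId>, remote_tips: &HashSet<EventId>) -> Vec<EventId> {
        remote_tips.difference(local).copied().collect()
    }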
ash: contact_pubkey = "3Jf8aEHStM3HKN1U2LRprpdiWL4hqSFkui8r38a922vZ" (your key) loopr: well we got some old benchmarks/tests generating random txs but not one to test against a live net, since drk is not ready yet so maybe a rust script could do the job ++ python put some love in the bindings rust/python whatever there are also txpool optimizations I want to do i love the python bindings haumea: Give me a sec since past impl had a lot of naivety so yeah a script would be handy ash: np but these are not deal breakers, as they are more like worst case/full load scenarios I'd be happy to give it a go if that's something we'd like to have tldr: consensus/darkfid/minerd is getting close for proper PoW testing next? re: remote testing, how do you want to do that? spin up some cloud vps or smth? yy, tor nodes, shit like that btw loopr, when you merge upstream, rebase your commits on top, don't merge the branch directly to not fuckup the history/graph ah brb (restarting) I had rebased the 1st time, then today was doing it again, and saw it wanted me to do the same conflicts again so I resorted to merge... a good workflow: git checkout ; git fetch origin; git rebase -i origin/master; git push b yeah I started out with that (it's then git push -f ) greptile: git checkout ; git rebase master; git push -f fewer commands no fetch? I didn't know rebase did a fetch if so You need to fetch/pull first well I assume you pull master locally anyway !next I never pull or merge but that's just me !next Elapsed time: 18.0 min No further topics anyway glhf everyone Thanks everyone thanks all ty o/ ty ty I have some questions if someone will indulge me ty all cause I'm a bit behind on things yes you also have me in DM Go for it yeah haumea I guess point 1: I have my old darkirc/ircd configs but no longer have access to the ggreptile github (or email) just FYI. can sort that out later 2. are we still GH based or is codeberg involved in our workflow now? we use codeberg over tor now <3 amazing https://darkrenaissance.github.io/darkfi/dev/contrib/tor.html Title: Using Tor - The DarkFi Book OK I'll spin up another pubkey maybe someone can add it to codeberg later? otherwise I can do PRs, don't really mind Sure just make an acc on codeberg ++ 3. my darkirc logs seem.. off. it tries to connect to a bunch of different ports on a few different hosts. so draoi I may be the "attacker" lol not sure if anyone else is experiencing that. I did a fresh build off of master but maybe my old config has something strange in it? !end Elapsed time: 5.1 min Meeting ended 4. I missed the chat logs for the past couple of hours. lmk if anything came up with the p2p hardening that I can comment on. otherwise, all good greptile: http://agorism.dev/log i also recommend cycling your nick periodically ;) any chance you can send logs greptile? in DM if you prefer ty for the logs definitely planning to cycle the nick, just wanted to say hi again :) welcome, glad to have you back :D contact_pubkey = "GNuVxM7358FSbDqZZw5dhkK2CHdhyxpP9cS2WhYao2uW" ^ if you can send me logs contact_pubkey = "27A7XkNbqkfD3xDFqtPuaShkxs2jY5LicjzzWgZHA3R8" there's another bug that's making CTRL-C hang on stop, trying to reproduce but can't get DAG to sync fml ty yeah I was having ctrl-c issues too ACTION facepalms my bad what's the cmd to like reboot ircd without actually doing it again?
it's like send SIGHUP or smth sorry too much of a linux noob to know what I'm doing there I just killed the process with kill -9 we are talking about reloading ircd w/o restarting errorist but i also dunno how to do that lol too fancy I just pull my power cord to kill everything :D the only way you can be sure is to unplug and then leave town for a while sell your belongings XD XD rofl just buy a new laptop instead of rebooting i once unplugged and woke up in the machine world haumea: that's deep upgrayedd bull market vibes mfw when unplugging https://c4.wallpaperflare.com/wallpaper/747/746/475/the-thirteenth-floor-abstract-wallpaper-preview.jpg ok biab haumea: did I get any closer? https://codeberg.org/darkrenaissance/darkfi/pulls/250 Title: #250 - WIP: First iteration of task 17, WIF formatting - darkrenaissance/darkfi - Codeberg.org prob not exactly the way you suggested, but maybe you like the blanket impl for Decodable/Encodable (courtesy idea brawndo) hey will check tmrw, i have a cold rn haumea: sure, get better soon ty upgrayedd: for a transaction generation script: 1. any example I can use 2. are there prefunded addresses (and their private keys) accessible in a local net/genesis? or would you rather prioritize smth else regarding network testing stuff loopr: you can generate a genesis mint tx for a wallet like we do in src/contract/money/tests/genesis_mint.rs and then feed that into script/research/gg to generate a genesis block containing that tx that way you can have a wallet with a whatever amount of drk to play with then you feed that genesis block into contrib/localnet/darkfid-* so the node(s) you wanna play with have that tokens minted and you start testing against them upgrayedd: ok I'll see if I can make sense of this info, thanks loopr give me some time for? patience frendo haha null problemo loopr: you know how pipes work right? yo loopr: check contrib/localnet/darkfid-single-node/README.md to see how to init a new wallet while the node runs then check script/research/gg there you can do shit like: ../../../drk -c ~/darkfi/contrib/localnet/darkfid-single-node/drk.toml --wallet-path ~//darkfi/contrib/localnet/darkfid-single-node/drk/wallet.db wallet --secrets | head -n1 | cargo +nightly run --release -- generate-tx 10.54 > genesis_txs/test && cargo +nightly run --release -- generate > test_genesis_block_localnet to create a new genesis block containing a genesis mint tx for your wallet so you can then move that to bin/darkfid/genesis_block_localnet, rebuild darkfid and restart node after ./clean.sh to have that block as your genesis then feel free to wreak havoc with random txs noice that's cool I guess the anatomy of a tx is somewhere in the docs btw you should keep the secret of the wallet, as after .clean it should be lost in the darkfid-single-node folder so you can just import it using --import-secrets gotcha for pipes we use bs58 encoding or base64 tx is bs58 ++ what do you mean anatomy? tx structure I assume fields, types etc. the structure of a tx so it is valid exactly HCF sec https://darkrenaissance.github.io/darkfi/dev/darkfi/tx/struct.Transaction.html Title: Transaction in darkfi::tx - Rust thats the actual structure awesoman I don't know why that matters tho XD well if I want to create txs to create some load I need to know how they look, right?
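paraphrasing the linked rustdoc, a tx is roughly this shape (types simplified here for illustration; the docs above are authoritative):

    // Simplified stand-in types; the real ones live in darkfi/darkfi-sdk.
    type ContractId = [u8; 32];
    type Proof = Vec<u8>;
    type Signature = [u8; 64];

    struct ContractCall {
        contract_id: ContractId, // which contract to invoke
        data: Vec<u8>,           // serialized call params
    }

    struct Transaction {
        calls: Vec<ContractCall>,        // the contract calls, in order
        proofs: Vec<Vec<Proof>>,         // ZK proofs, grouped per call
        signatures: Vec<Vec<Signature>>, // signatures, grouped per call
    }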
oh I would suggest looking at src/contract/test-harness/src/* there you can see how each call of our native contracts gets built cool will do and drk obviously, the actual wallet, but it's work in progress thanks for all the pointers, this should be some fun you will see that drk builds txs the same way as the test-harness so they are mirror-like I suggest you first get familiar with running a node and initializing a wallet (contrib/localnet/darkfid-single-node/README.md) and then start looking at creating actual txs for sure but as of now, all the tools are there to create custom genesis blocks for testing feel free to reach out if you want something more specific appreciated one note: I don't care about devex, hence the pipes devex? devs should learn to use the terminal and unix style tools developer experience oh no worry a fancy new term for incompetence of please make the tool use a json api for calling states shit like that you know the drill yo \o/ oh and btw forgot to mention use same addr in the node/miner so you get the free rewards XD but I guess with a custom genesis you can pretty much premint whatever amount you want to play with why is that relevant for testing? to check the rewards are working? its more relevant for forks testing since we want to ensure that the correct miners got the rewards okey doke for example minerA is the block producer for ForkA, minerB for ForkB ForkA gets finalized, so minerA gets the reward and not minerB ++ HCF are you still here? gm hey upgrayedd: Yeah I think the tx encoding should be base64 rather than base58 It gets no benefit being base58 b58 for humans, b64 for protocols Yep brawndo: oh noice, I will update the rest of the stuff from bs58 to base64 should I also use deserialize_async everywhere possible? i think it's good for network code since it minimizes buffering. you just immediately deserialize from the socket stream altho we use buffers right now, so it's not really used dasman: with eventgraph we need to be able to use a tool to output the tree for debugging so when our eventgraphs are not synced, we can see where they differ ideally we could run a local eventgraph (maybe in python?) where we run the log through the simulator, and then try to feed it the missing events upgrayedd: Yeah it's good to use it I didn't make DaoParams as base64, but I guess that can be too brawndo: ok will update darkfid and the rest of the tools that use encoded stuff to base64 and deserialize_async Thanks Funnily enough I saw that drk<->darkfid do: 1) drk decodes base58 tx 2) drk encodes tx to base64 and sends to darkfid over RPC :D yeah thats because I try to minimize darkfid external deps so thats what you get :D some building results, the latest test (and zk bench) have eaten over 40 GB RAM, so I skipped the tests in docker builds images for almalinux fedora ubuntu debian rocky oraclelinux for 2024-03-24_ecb3b833f created for x64 let's see what my ARM is going to say ... nice work haumea: any idea why bench_zk.rs fails? hey checking it worked for me the other day what error do you get? root: btw you should use --release for zk stuff cargo test bench_zk --release --all-features -- --nocapture it's working for me upgrayedd: ^ haumea: perhaps the pipeline vm doesn't have enough mem? checking locally haumea: how long does the test take?
I'm on k=16 in general benches shouldn't be included in pipelines but run on demand, so the test should be ignored upgrayedd: ok i will change this to cargo bench the reason i didn't already was because i couldn't find how to do named variants, but i found yesterday benchmark groups .bench_with_input() https://github.com/getsentry/relay/blob/master/relay-cardinality/benches/redis_impl.rs#L137-L165 Title: relay/relay-cardinality/benches/redis_impl.rs at master · getsentry/relay · GitHub so will change to this for k=... will do that later today (quick errand in meatspace) noice btw so far I'm seeing >4GB of ram consumption so I'm pretty sure the pipeline can't handle that I don't think it has that much memory can we add this for benchmarks? https://github.com/bheisler/criterion.rs Title: GitHub - bheisler/criterion.rs: Statistics-driven benchmarking library for Rust https://bheisler.github.io/criterion.rs/book/getting_started.html Title: Getting Started - Criterion.rs Documentation then i can create a testing group also it does graphs: https://bheisler.github.io/criterion.rs/book/user_guide/plots_and_graphs.html Title: Plots & Graphs - Criterion.rs Documentation but this is what i need: https://bheisler.github.io/criterion.rs/book/user_guide/benchmarking_with_inputs.html#benchmarking-with-a-range-of-values Title: Benchmarking With Inputs - Criterion.rs Documentation has changed since the last benchmark run where does it store stuff? it doesn't store anything bbl cya How Should I Run Criterion.rs Benchmarks In A CI Pipeline? You probably shouldn't lol glhf only src/contract/dao/ can create a csv file if you uncomment some line of code https://darkrenaissance.github.io/darkfi/dev/bench.html Title: Benchmark - The DarkFi Book ok afk btw k=18 uses >16GB of ram lol dasman: commit bot is down btw haumea: when you are back please check b6c7b5ff3cb44cb2685705d474a12d92dda72434 haumea: Yeah criterion is nice haumea: apart from the deprecation comment, using just the funcid and not a prefix can lead to collisions Oh wow, all nodes are down [WARN] No connections for 81035s. Refinery paused. and keep connecting to the same ip/diff port haumea: don't you mean this: tau SjJ2OA ? : hihi : @skoupidi pushed 2 commits to master: 4d74247c9f: drk: fixed erroneous eprintlns : @skoupidi pushed 2 commits to master: 7e6a9a937b: script/research/gg: GenerateTx added to create genesis mint txs : @draoi pushed 1 commit to master: afde25dd1d: store: don't store whitelist entries in greylist on stop()... : @skoupidi pushed 1 commit to master: 2956207cc5: chore: clippy : @skoupidi pushed 1 commit to master: 4b12c9de4f: chore: clippy : @skoupidi pushed 3 commits to master: 3fc0fb19ed: minerd: use (de)serialize_async b test test back upgrayedd: check darkfi/src/contract/money/proof/token_mint_v1.zk:30 this is how the token_id is calculated now ACTION adding criterion.rs to dev-depends ^ it only impacts benchmarks, the core code remains unchanged. it's a unit test framework brawndo: fyi hanje-san: oh ok so the user info gets into user_data as the poseidon hash of their public key x,y https://bheisler.github.io/criterion.rs/book/user_guide/command_line_output.html#change Title: Command-Line Output - Criterion.rs Documentation upgrayedd: for the canonical token_id derivation (including user_data), see darkfi/src/contract/money/proof/auth_token_mint_v1.zk:42 indeed the mint key (x, y) Sounds good still a prefix is needed to avoid collisions no? 
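for reference, the benchmark-group pattern mentioned above looks roughly like this (create_proof() is a stand-in for the real proving call):

    use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};

    fn create_proof(_k: u32) { /* stand-in for the real prover */ }

    // Named variants per k, so reports read "zk-proof-create/13" etc.
    fn bench_proofs(c: &mut Criterion) {
        let mut group = c.benchmark_group("zk-proof-create");
        for k in [11u32, 13, 14] {
            group.bench_with_input(BenchmarkId::from_parameter(k), &k, |b, &k| {
                b.iter(|| create_proof(k));
            });
        }
        group.finish();
    }

    criterion_group!(benches, bench_proofs);
    criterion_main!(benches);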
check native token for example: DARK_TOKEN_ID = TokenId::from(poseidon_hash([*TOKEN_ID_PREFIX, pallas::Base::zero(), pallas::Base::from(42)])); while a normal token is: let token_id = poseidon_hash([func_id, user_data, blind]); you shouldn't get collisions if the blind is properly generated. that's like a prefix (it has a dual use) unless you mean adding a field which says "this is a token_id", we could do that but we aren't doing it anywhere else (it's like type safety for cryptographic commitments) yeah that what the prefix is for, to showcase that this is a token id otherwise since only auth/user knows the blind/user data, can't they generate a token_id thats equal to the native one? therefore have control over native mint they cannot since the DARK_TOKEN_ID cannot be minted. in fact it should just be an invalid pubkey wdym dark_token_id cannot be minted? PoWRewards mints them i mean the derivation should be changed so it's an EC point created by hash to curve so yes it's incorrect since we don't use the derivation for DARK_TOKEN_ID, it's just a constant oh ok so also make the native token follow same mint auth, but using an "invalid" one nobody can use? yep aha I thought I was going crazy lol hash_to_curve("foo") lol no it's good ty :D i just made the change locally... running unit tests and will commit if all good noice : @zero pushed 1 commit to master: f8f446f916: money: change DARK_TOKEN_ID = hash_to_base("DarkFi:DRK_Native_Token") BLOCK_HASH_DOMAIN is unused, should we remove this? : @zero pushed 1 commit to master: afa1856236: tests/bench: delete bench_zk, add zk_arith (uses criterion crate) fix'd btw we can benchmark async functions https://bheisler.github.io/criterion.rs/book/user_guide/benchmarking_async.html Title: Benchmarking async functions - Criterion.rs Documentation dasman I was able to connect after I increased my outbound connection limit. but yeah the greylist is malfunctioning a bit I think increasing the limit helps with throughput of pruning bad greylist entries but somewhere in the network I think bad greylist entries are being sent back-and-forth through the network and aren't pruned fast enough I spent some hours the past couple of days reading through the design and implementation and trying to think of a solution I think adding a bit of logic to delay retrying on an IP address we've recently tried could be helpful (i.e. don't retry on different ports in a short time window) I'm still pretty much convinced it's outbound connections being stored, because recently a node of mine discovered 167 addresses all I could see was my own ip with different ports What should work best at least for now is to set outbound connections to 0 and connect to peers manually peers = ["tcp+tls://xeno.tools:26661", "tcp+tls://anon-fore.st:26661", "tcp+tls://dasman.xyz:26661"] And comment out seeds ok trying that now it's an attack i don't recommend testing rn i'm working on a fix will push 2m the attack is: anyone can fill their greylist with garbage info and this will get propagated by honest nodes i have a solution, will push asap with documentation in the commit that explains everything s/greylist/hostlist draoi: thanks a lot draoi sounds good lmk if I can help with testing later If I run ircd (it's a daemon), and use weechat as a client, am I adding a routing node to the network or am I just connecting to the network? can other people use my ircd in other words? 
loopr: not if you set irc_listen to a public ip (or resolvable address) and share it otherwise you're safe ah ok but the listen address is the only item blocking my node from the public correct? so if I wanted to add a _public_ node to the network (I am assuming this helps for stability?) I would just change the irc_listen to a public ip and voila? yeah you usually set it to localhost, so it runs locally on your machine cool thanks dasman and yes, but ircd is pretty much stable plus darkirc should be coming up soon deprecating ircd no problemo sir ah ok then I'll wait out darkirc upgrayedd: played around with drk and stuff a bit so far, then wanted to run "drk... ... | cargo +nightly run --release -- generate-tx..." and that errors Error: Io(PermissionDenied) error: a bin target must be available for `cargo run` searched what that meant but dunno what to point it to loopr: the command is supposed to run inside script/research/gg, hence all the ../../ and ~/ paths aaaah ofc, that's what gg stands for gg stands for genesis generator I was running stuff from contrib/localnet/darkfid-single-node/ which has the same 3 levels of indentation... yeah got it now, thanks gg :D :D XD hi building darkirc now to test the new changes that's nifty, so that stuff generates a new genesis block for my very own wallet, no need for pre-minted addresses and stuff I like that well you still need to recompile darkfid, reset/restart the node(s), import wallet again and scan not a perfect solution, but gets the job done when you know what you are doing ya it's actually still generating the block...that takes quite some time well if you never ran cargo run before it needs to compile everything first I ran it first just to see what the generate-tx would do (without value), so that compiled it there already now it's just the actual task and it's also taking pretty long did you ever run make test in repo root to generate vks/pks caches? yeah I'm a bit unclear on how rust/cargo caching works. small changes to a codebase seem to generate a lot of logs that look like rebuilds not just in darkfi also what exactly did you run? yep observed that too like, when I just change something in a unit test, it still recompiles quite a few things > did you ever run make test in repo root to generate vks/pks caches? most probably yes (you might remember you suggested to do so some time back) send the command you executed obfuscating paths if necessary obviously ../../../drk -c ../../../contrib/localnet/darkfid-single-node/drk.toml --wallet-path ../../../contrib/localnet/darkfid-single-node/drk/wallet.db wallet --secrets | head -n1 | cargo +nightly run --release -- generate-tx 900 I got a large string as output but this is not the genesis block? in your instructions you added ...
generate-tx 10.54 > genesis_txs/test && cargo +nightly run --release -- generate > test_genesis_block_localnet no thats not the genesis block thats the encoded tx for the genesis mint of your addr aaah ofc that's why the rest > genesis_txs/test this puts it inside the folder the tool uses to gather txs to build the block ++ && cargo +nightly run --release -- generate this generates the block using the txs from that folder (you can use a diff folder via arg) understood and > test_genesis_block_localnet outputs the stdout of the previous command (the block generation) into a file called test_genesis_block_localnet yep so the full flow is: invoke drk to get wallet secrets -> pipe that to head to grab the first line, aka first wallet -> pipe that to generate-tx 900, since it grabs the wallet secret to generate the tx from stdin -> store stdout to file -> invoke gg to generate block -> store stdout to file sorry if the paths become confusing, I'm just lazy and run everything from the current folder you can script that very easily, since invoked commands use stdin/stdout if you prefer yeah that's why I wanted to do every step on its own, to understand what it's doing how much is 1000 drk? how much is a tx? how long does 1000 drk last? 1) What in localnet they are all the same question really I understood 0/3 If I am generating a new genesis block with 1000 drk for my wallet, how many txs can I do with those 1000 drk? I don't remember if fees are enabled in the current darkfid build but in general, a lot Ah ok so no need to generate 1e9, or 1e12 or so nah I don't see the point ++ upgrayedd do you know much about how sled works? HCF: I believe I have a good understanding yeah, why? I'm taking a look at the TODO in src/irc/client.rs about pruning the 'seen' items from IRC I think a good solution is to add a timestamp value alongside the event_id in `fn mark_seen()` and prune when either the length of the tree is long or after a time. but if we do that we probably want to prune older msgs first so I'm just wondering if there's a different/better way to get the time, like if there's a way to retrieve a timestamp from sled itself. not seeing anything in docs though first of all I haven't taken a look lately at the darkirc code so I don't know exactly what is going on there but anyway since its tied to a dag, pruning can happen on each tree reset no need to bother with sled, other than dropping the keys OK I'll check that it's there. if that's already being done I'll delete the TODO I don't see a remove_seen fn or something so its not there ++ going afk, glhf everyone o/ \/ ok, so dockers are built for aarch64 as well, no alpine, though it worked before, udev libs in conflict.. no amazon linux as it's more like rhel7 and it needs some fiddling to get sqlcipher and wabt there re - AI for SW development - it might come as a surprise to some, but it seems not laughable anymore Devin correctly resolves 13.86%* of the issues end-to-end, far exceeding the previous state-of-the-art..
https://www.cognition-labs.com/introducing-devin Title: Introducing Devin, the first AI software engineer https://techcrunch.com/2024/03/20/githubs-latest-ai-tool-that-can-automatically-fix-code-vulnerabilities Title: GitHub's latest AI tool can automatically fix code vulnerabilities | TechCrunch but again, as I said before, I came here to sync efforts, not to argue gm HCF: It just has to be deleted daily HCF: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/bin/darkirc/src/main.rs#L197 Title: darkfi/bin/darkirc/src/main.rs at master - darkrenaissance/darkfi - Codeberg.org https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/event_graph/mod.rs#L124 Title: darkfi/src/event_graph/mod.rs at master - darkrenaissance/darkfi - Codeberg.org https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/event_graph/mod.rs#L162-L174 Title: darkfi/src/event_graph/mod.rs at master - darkrenaissance/darkfi - Codeberg.org It's left as a TODO because I don't have an idea where the pruning should take place in darkirc I was first thinking that perhaps EventGraph should also take a slice of sled::Tree in order to prune them alongside the main graph But that didn't seem good after I implemented it and read it hey o/ : @draoi pushed 1 commit to master: 4bad13e687: net: create `darklist` for unknown transports + share darklist (not greylist)... HCF, dasman: checkout the commit msg have done some small local testing and seems to work fine, gna try it out now in the wild there may still be other bugs etc, like the hanging CTRL-C and potentially what dasman was reporting wrt storing nodes- should be easier to catch tho now ah noting this update does require a hostlist refresh (delete your hostlist before re-running) : gm : network should self-cleanse from all the garbage once a sufficient amount of nodes update i mean it doesn't "require" it, like nothing bad will happen if you don't, but the old hostlist will be useless since the format changed : @zero pushed 4 commits to master: 4c049778bb: sdk/crypto: FieldElemAsStr trait which provides to/from_str() for Fp/Fq : @zero pushed 4 commits to master: a1c48a39c7: zk/debug: add import_witness_json() : @zero pushed 4 commits to master: 4532b8d229: for most .zk proofs, provide a corresponding witness.json file which is usable with zkrunner, benchmarks and other utils (using import_witness_json()). : @zero pushed 4 commits to master: 19016fb521: bench: add generic zk_from_json() which will benchmark most .zk files using witness.json files provided. Fp/Fq already implement Display You just need to strip the 0x yeah i copied that code Also: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/sdk/src/crypto/mod.rs#L80-L124 Title: darkfi/src/sdk/src/crypto/mod.rs at master - darkrenaissance/darkfi - Codeberg.org You don't need the to_str() function Also it's not a str but a String Use the Rust traits There is TryFrom ah great, so .to_string() converts bytes to hex now? Fp::from(1).to_string() gives 0x0000000........1 yeah that's using Display pub trait FieldElemAsStr: PrimeField { fn to_str(&self) -> String { This is unnecessary And wrongly named from_str also should be traited ok thanks TryFrom i don't think that's allowed implementing an external trait for an external type You can wrap it Why do you need the hex stuff btw? see import_witness_json() in zk/debug.rs ah so because of python? 
actually for benchmarks see bench/zk_from_json.rs aha ok also all the proofs now, have a corresponding witness.json file, see */proof/witness/ Yeah I'd prefer if you prefixed it with 0x tbh so you can easily import them for debugging .etc That will match the pasta crate Display, and it's easy to strip_prefix when doing FromStr ok i can do that, then in from_str() check the first 2 bytes == 0x https://doc.rust-lang.org/std/primitive.str.html#method.strip_prefix Title: str - Rust You'll have an option >If the string does not start with prefix, returns None. nice ty i cannot find how to implement TryFrom on Fp, i think this trait has to stay https://stackoverflow.com/questions/37904628/implement-stdconvertfrom-on-type-from-another-crate Title: rust - Implement std::convert::From on type from another crate - Stack Overflow although that's old You're supposed to wrap it (unfortunately) you mean wrap Fp into Fp2? Yeah so like we have Coin or DaoBulla struct Foo(Fp) Fp2(val).try_into()? like this? seems weird, maybe i can try specify trait FieldElemAsStr : TryFrom<&str> I mean I guess it's fine like this too There's likely a reason they didn't implement this in the pasta crate, but I can't see it most likely just forgot or didn't want to Probably they just want to you deal with encoding the bytes yourself So you have the repr() and you'd have to hex it yourself And then Display being a debug-only thing It is more correct, but annoying loopr: much better now, but this function should be left blank: fn wif_prefix(&self) -> u8; and you must check the prefix is correct when decoding also can you remove verify_checksum()? it should be called by from_wif() on the *decoded* bytes so remove #1 from verify_checksum(), and then call it inside from_wif() also you don't need an individual test for each function, just make a single one testing all of them in sequence also in verify_checksum() this is bad: match bs58::decode(&darkfi_serial::serialize(self)[1..]).into_vec() { it's wrong. i'm not even sure what it means because the comment doesn't match the code seems like you think there Self = String, and then you are serializing the String which adds a prefix byte for the length so you remove that, then deserialize the base58 yet in the rest of the impl, Self is a concrete type so it's inconsistent let wif = WIF::to_wif(&pk); assert_eq!(wif.verify_checksum(), true); no you have: fn from_wif(wif: String) -> Result; so Self is a type T, but then your verify_checksum() assumes Self = String : @zero pushed 1 commit to master: b1ba95b9e0: bench: correct paths and add missing EcNiPoint to import_witness_json() gm I was taking a look at the TODO in the darkirc code that mentions pruning the 'seen' msgs upgrayedd mentioned it would probably be a good idea to prune it whenver the Dag itself is pruned to do that, does it make sense to create a StoppableTask or Subscribed for pruning sled? or is that overkill? basically I'm unclear how to notify that a Dag prune has occurred to then trigger the sled pruning for 'seen' msgs What I was saying, there is already a backround task running that does this https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/event_graph/mod.rs#L480 This one Title: darkfi/src/event_graph/mod.rs at master - darkrenaissance/darkfi - Codeberg.org ok i might have missed your msgs. 
my set-up doesn't always have ircd running I thought of "solving" the TODO by modifying this function to accept an arbitrary number of sled trees that would also get cleared But not sure if that is a proper solution, it feels like a plug-in yeah I'm unsure too. I like how the event_graph is separate from darkirc right now that's why I was thinking of creating a new Task or Subscriber or something so we don't make a strange exception for this one case Yeah Although it'll be hard to sync them up So perhaps yeah, the eventgraph pruning task can push a notification upon pruning yeah I don't know if they need to be totally synced for this one example And we can subscribe to that If they can it would be prefered Also just triggering it rather than polling seems better yeah for sure. especially if we need this solution more than once by polling you mean Subscribe/notify code? I didn't look into how it's implemented deeply No sorry, by polling I meant another looping prune task I think the darkirc task can just wait for notifications on when to prune the "seen" tree ++ yeah polling seems like not a good approach if we can avoid it idd thanks for the input. I'll go back into the code and see what I can do . it's sort of a nice small way to learn how the StoppableTasks work like I think I understand them conceptually but have not worked much with Futures Thanks a lot :) So essentially I think you need to add a notification mechanism in the eventgraph prune thing ++ And then anyone should be able to subscribe to it Yeah this is the better solution :) would that be a JsonSubscription or a system::Subscription? The latter ok also I sent an SSH key for codeberg in a priv msg, did you (or someone else) see that and add it? again I probably missed a reply. sorry for that I added the sighup account to the codeberg organisation : @draoi pushed 1 commit to master: 10404a962e: doc: fix various hosts documentation https://codeberg.org/sighup Title: sighup - Codeberg.org perfect draoi darkirc seems better now, not getting the spam like before good there will still be nodes sharing hostile lists before the network has fully updated, but they will no longer be shared by honest nodes (providing they have updated) yeah I guess that will always happen back later o/ o/ me too, gna go lift heavy things cya hanje-san: We're done with the zkvm, will push soon likely draoi: excellent <3 great news, looking forward to it i want to move on from dao soon and finish drk, but also interested to deep dive more into benchmarks/cost analysis : @therealyingtong pushed 2 commits to master: ce35921cab: zk::vm: Refactor range checks to reuse table : @therealyingtong pushed 2 commits to master: b0df9d5f38: chore: Clippy lints hanje-san: ^ Now the sinsemilla lookup table is also reused in the range check chips wonderful news Which also means no need for NUM_WINDOWS const // Domain prefix used for block hashes, with `hash_to_curve`. pub const BLOCK_HASH_DOMAIN: &str = "DarkFi:Block"; btw can we delete this? it's unused : @parazyd pushed 1 commit to master: 19b0325dd9: contract/test-harness: Update PKS and VKS cache hashes That was used for hashing the blockhash into a point In PoS hanje-san: The witness json files get changed when I run tests This is extremely annoying they shouldn't do https://termbin.com/w4at This happened after running `make test` in money just now are you on latest master? ah my bad i forgot to comment those lines... 
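a minimal sketch of the prune-notification approach agreed above, using a plain async channel as a stand-in for the system subscription primitives (names hypothetical):

    use smol::channel::Receiver;

    // darkirc side: wait for the eventgraph to announce a DAG prune,
    // then clear the irc "seen" tree in lockstep.
    async fn prune_seen_on_notify(prune_rx: Receiver<()>, seen: sled::Tree) -> sled::Result<()> {
        while prune_rx.recv().await.is_ok() {
            seen.clear()?;
        }
        Ok(())
    }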
will fix it later Yes as you can see I pushed commits darkfi/src/contract/dao/src/client/exec.rs:117 it should be commented like this i need to regen them anyway since now we want the 0x prefix, then when i commit that i'll comment this ok thanks np if in rust i want to do sth like #ifdef GEN_WITNESS, would i use a feature flag for that? Yes ok nice But we run things with --all-features usually So you might want to do a negation ah yeah ok so not(not_gen_witness) Yeah it keeps the makefiles simple now i finally understood why we use double negation everywhere :D it's so confusing for a peabrain like me, i have to read it out loud 2-3 times https://parazyd.org/pub/dev/random/states-of-a-programmer.png I'll bbl, gonna get some air enjoy rightfully earned a daily bag of sun woah i'm like in a superposition of those two states inside me there are two wolves hanje-san: hanje-san │ loopr: much better now, but this function should be left blank: you mean each implementation has its own prefix? fyi: https://github.com/darkrenaissance/darkfi/pull/256 Title: Fix dev link to dev section by holisticode · Pull Request #256 · darkrenaissance/darkfi · GitHub hanje-san │ and you must check the prefix is correct when decoding do I? I mean, doesn't the checksum already take care of that? hanje-san │ seems like you think there Self = String no I don't, but Self is a , so I use `serialize` to get a vector from T otherwise, how am I going to get a vector of something I don't know what it is? that was the whole idea behind using Encodable/Decodable, so one blanket implementation covers a lot of different types nvm, after moving verify_checksum() out of the WIF it became clearer and it's now fixed if I wanted to generate some amount of transactions, can I just use TestHarness::transfer to create txs, and then execute them in a batch? Or should I use a different pattern? : draoi: which seeds are you using? where is this mirrored from/to? loopr: darkirc dasman ++ loopr: commit messages like > first iteration should be avoided, see our commit log also you should move verify_checksum() back inside the impl, but keep the function the same (without &self) i think it's better to do pub trait WIF: Encodable + Decodable { ... } and just move the functions all inside the trait fn wif_prefix(&self) -> u8 { 80 } <--- remove this! in from_wif() you aren't checking the prefix is correct also in to_wif(), your numbers in the comments are 1, 2, 5, 6 ... you should just delete the numbers so for the commit messages, can you use git rebase -i (lookup usage), and squash all the commits into a single one and rename them to sth meaningful with a description of what's being added and what it does then git force push to your branch --- brawndo: Fp.to_string() doesn't exist and that code is for Debug, not Display. Are you sure it's correct to make a conversion function rely on that? for now I'll add "0x", and if we want to use that then i can change it to return format!("{:?}", self) : @zero pushed 1 commit to master: 61661052ce: sdk/util: rename Fp.to_str() to Fp.to_string(), and encode/deocde hex strings with 0x prefix. 
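the 0x convention from that commit boils down to roughly this over the 32-byte repr (illustrative; the repr is little-endian and gets reversed for display, following the pasta crate's convention):

    // Encode: reverse the little-endian repr so the string reads big-endian.
    fn to_hex_string(repr: &[u8; 32]) -> String {
        let mut s = String::from("0x");
        for b in repr.iter().rev() {
            s.push_str(&format!("{:02x}", b));
        }
        s
    }

    // Decode: strip_prefix() returns None when "0x" is missing; the
    // resulting bytes are big-endian, so reverse again for the LE repr.
    fn from_hex_string(s: &str) -> Option<Vec<u8>> {
        let hex = s.strip_prefix("0x")?;
        (0..hex.len())
            .step_by(2)
            .map(|i| u8::from_str_radix(hex.get(i..i + 2)?, 16).ok())
            .collect()
    }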
: @zero pushed 1 commit to master: cb80b9a69f: money/dao: regen zk witness json files, and comment out zk::export_witness() from money clients now you can do: [~/src/darkfi]$ ./bin/zkrunner/zkrender.py -w src/contract/dao/proof/witness/exec.json src/contract/dao/proof/exec.zk /tmp/circ.png : gm : gm : dasman: i'm using these seeds: seeds = ["tcp+tls://anon-fore.st:5262", "tcp+tls://xeno.tools:5262"] : @zero pushed 2 commits to daosmt: 1cb0b10772: dao::propose: add SMT for benchmark (improper impl) : @zero pushed 2 commits to daosmt: 6b999d5906: dao::vote: add SMT for benchmark (improper impl) ^ this is on a branch, so can run benchmarks on various machines : @zero pushed 1 commit to daosmt: 11c121a6fd: DAO propose/vote: update witness json : @zero pushed 1 commit to master: b2d29aaa0a: sdk/python: add missing SparseMerklePath gm been testing nymvpn, it's in alpha/beta state, works for apps like tg or signal but for ircd I can't make a connection, dunno why gm hanje-san: I suppose it doesn't matter ++ : @zero pushed 1 commit to master: 03ca4794eb: bench: simplify and improve zk-from-json benchmark : @zero pushed 1 commit to daosmt: e7ae5eb6d8: bench: simplify and improve zk-from-json benchmark : @zero pushed 1 commit to master: 79b6276fc8: Makefile: add missing dependency 'contracts' to bench target and rm src/contract/test-harness/*.bin : @zero pushed 1 commit to daosmt: 018425006b: Makefile: add missing dependency 'contracts' to bench target and rm src/contract/test-harness/*.bin hanje-san: Use `rm -f`, otherwise it'll be an error if the file doesn't exist : @zero pushed 1 commit to master: dd5c4d747f: Makefile/bench: add -f to rm so nonexistent files don't block target : @zero pushed 1 commit to daosmt: b07ce229db: Makefile/bench: add -f to rm so nonexistent files don't block target hanje-san: Also don't use underscores in makefile targets, dash (-) would be preferred ok thanks hanje-san: And finally add the non-file targets to PHONY (Bottom of the file) : @zero pushed 1 commit to master: 452a6740f2: Makefile: s/bench_zk-from-json/bench-zk-from-json/ and add bench* to PHONY : @zero pushed 1 commit to daosmt: 6fd86856bc: Makefile: s/bench_zk-from-json/bench-zk-from-json/ and add bench* to PHONY done ty ty btw PHONY is used to mark targets that aren't expected to be an actual file Everything else a Makefile will always expect to result in a file with the target's name PHONY ensures they will always run gr8 got it https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html Title: Phony Targets (GNU make) my benchmark script: https://agorism.dev/uploads/run_bench.sh oh ffs forget sth, one sec Are the changes I made to halo2 enough for benching? : @zero pushed 1 commit to daosmt: e1da305960: Makefile: change baseline from master to daosmt (for daosmt branch) we will see, i will make python bindings and implement the cost function from halo2 in python, then make a tool for analysis ok now the script should work cargo bench -- --load-baseline daosmt --baseline master then after this will output the comparison Cool yeah the benchmark framework is nice, it does repeated sampling until the variation is below a significance level (using statistical hypothesis testing) so you don't need to worry about not using the computer at the same time. 
the results are guaranteed to be accurate and then when doing the comparison, it always shows you if the performance increase (or decrease) is statistically relevant so it's very accurate and reliable also it 'warms up' the tests lol by doing initial runs Yeah I used it before ah nice https://github.com/parazyd/kyber-kem/blob/master/benches/ops.rs Title: kyber-kem/benches/ops.rs at master · parazyd/kyber-kem · GitHub nice, you can make a benchmark group for those params then it draws you a graph https://codeberg.org/darkrenaissance/darkfi/src/branch/master/bench/zk_arith.rs#L56 Title: darkfi/bench/zk_arith.rs at master - darkrenaissance/darkfi - Codeberg.org : test test back : @zero pushed 1 commit to master: edf36acc88: Cargo.toml: add bench=false to disable libtest benchmarker : @zero pushed 1 commit to master: 5bfd718575: doc: add desktop nullifier/SMT benchmarks comparison : @zero pushed 1 commit to master: 2527dd0812: doc: add laptop nullifier vs SMT benchmarks brawndo: https://darkrenaissance.github.io/darkfi/dev/bench.html Title: Benchmark - The DarkFi Book nullifier < SMT < nullifier + money::transfer() call (in terms of speed) so SMT offers many benefits, and it's faster (when considering the nullifier combined with moving the coin for anonymity) so tmrw i will begin implementing SMT into dao Thanks I'll have a look ++ Speed is not the only thing though You should look at proof/tx sizes, and wasm gas usage wasm gas usage is the same, but will look at the proof size Also make sure the wasm host functions you added account for all the gas (those ones are not automatic) Impossible that it's the same How so? because the SMT computation only happens inside ZK, in fact we remove code from DAO unless you mean money, in which case it's a small impact on adding nullifiers to the tree but i can benchmark that Yeah pls review it And if you don't mind, clean up runtime/import/smt.rs a bit Code styling can be improved (It's sensible to have blank lines separating things sometimes) btw I think nullifiers are slow mainly because we hold them in a vec rather than a tree That's a free optimization we can make With how many nullifiers are you testing anyway? ACTION wouldn't try with less than 1M ok will cleanup smt more, i need to add more comments in general i can see your code is well commented 'like a boomer' wdym holding in a vec?
we use the sled DB which is a b-tree, although the fastest is a hash-set (which sled doesn't have) sec nullifiers aren't slow, it's just the SMT avoids us having to make a concurrent money::transfer() call which involves an extra signature + 1 input proof + 1 output proof + extra wasm call (which is fast tho in general) oh sry yeah I got confused by something It used to be in a vec before about nullifiers being slow, the main concern is overhead in money::transfer() since we have to calculate 256 extra hashes and writing the nodes to DB Gotcha it's a tough decision here since generally we prefer offloading work to wallet, but here SMT is conceptually cleaner and might be faster even i'm a bit impatient tbh to get moving and close this, but don't want to make the wrong decision either since we then have to live with it will look at: SMT vs dao nullifier proof sizes, analysis of overhead for SMT in money::transfer() Yeah we should likely start looking into the zkvm layout optimisation too incrementing k is logarithmic tho, so k=14 is better than 2 k=13 proofs Because once that changes, it will invalidate old proofs we should add version to zkas for that Although we have zkas versioning, but still it'll become double-maintenance There's binary version in zkas yep, we can just move things into sub-trees and delete after some time people can checkout old versions to access old compilers > hanje-san | also you should move verify_checksum() back inside the impl, but keep the function the same (without &self) https://darkrenaissance.github.io/darkfi/zkas/bincode.html#binary_version Title: Bincode - The DarkFi Book nice afaik private methods are not allowed in trait implementations? SMT is a single k=14 proof, whereas dao nullifier + money::transfer() is 3 k=13 proofs > hanje-san | in from_wif() you aren't checking the prefix is correct loopr: well you can put it outside the trait, but not in the impl block for Encodable in fact there should be no impl block at all you may have missed my earlier comment, why is this required? doesn't the checksum check already take care of that? no checksum checks the integrity of the data but you aren't checking the wif is for this type we are decoding ah you're right, during the checksum check I think we strip the prefix ok ++ last thing I still don't get each type should indicate support for Wif impl Wif for Foo { fn wif_prefix() -> u8 { 110 } } last thing I still don't get hanje-san | fn wif_prefix(&self) -> u8 { 80 } <--- remove this! pub trait Wif : Encodable + Decodable { ... } who provides the prefix then yes delete the entire impl block impl Wif for Foo { fn wif_prefix() -> u8 { 110 } } plz read the rust book chapter on traits >SMT is a single k=14 proof, whereas dao nullifier + money::transfer() is 3 k=13 proofs actually I did, but reading without applying isn't same effective Solid yeah yeah surprising huh? loopr: its the same as C++ virtual method virtual void foo() = 0; hanje-san | impl Wif for Foo { fn wif_prefix() -> u8 { 110 } } you cannot use the type which inherits Wif, unless that type also implements the foo() method how is that different to what is there now your wif_prefix() is completely useless every type has a prefix of 80 can you go on libera IRC and join ##rust? aaaah so every type will have its own prefix? 
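i.e. the shape being asked for is something like this (sketch with bodies elided; the real trait is also bounded on Encodable + Decodable):

    trait Wif: Sized {
        // no default body: every implementing type must declare its own prefix
        fn wif_prefix() -> u8;

        fn to_wif(&self) -> String {
            // prefix byte + serialized payload + checksum, base58-encoded
            todo!()
        }

        fn from_wif(_wif: &str) -> Option<Self> {
            // decode, verify the checksum over the decoded bytes, then
            // check the prefix matches Self::wif_prefix()
            todo!()
        }
    }

    struct Foo;
    impl Wif for Foo {
        fn wif_prefix() -> u8 {
            110
        }
    }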
yes ofc wasn't clear to me plz join libera IRC, and go to ##rust it's like a free rust 24/7 support channel THEY DO IT FOR FREE yeah haha use them ok so I can add the libera server right from this same client I assume Yep ++ https://libera.chat/ Title: Libera Chat | A next-generation IRC network for FOSS projects collaboration! ##math, #ethereum, #maemo-leste, #cyberpunk, #electrum, #monero-* #bitcoin-* <- some channels for your amusement yeah I have been there before use /list to view also lainchan #programming is pretty good before people forced me to use slack, discord, tg and whatnot so it was too much :) i have a special work computer for all that, it's always a hassle turning it on IRC = fun ++ btw what happened to freenode libera oh that's what it became ++ hanje-san: sorry for my rookie slowness, but afaics I can't move the impl into a default trait implementation in the default implementation I don't know how to convert self into a byte array / vec in order to concat and then hash or what am I missing that's what `serialize` was doing for me with impl are the tor peers in the default darkirc_config.toml expected to be live? can't connect to darkirc via the tor transport : @foo pushed 1 commit to master: 1383e7bd47: doc: Remove reference to quarantine settings... HCF I recall having issues connecting to tor nodes because one setting about the tor transports in the default toml wasn't right. That was for ircd though, may not apply to darkirc hey HCF try with these two: tor://6pllu3rduxklujabdwln32ilvxthvddx75qg5ifq2xctqkz33afhczyd.onion:25551 tor://cbfeuw6f4djcukyf3mizdna3oki6xd4amygprzdv4toa3gyzjiw6zfad.onion:25554 hi which seeds should i use for darkirc loopr: copy my text and make notes cos i keep spelling things for you but you don't listen > pub trait Wif : Encodable + Decodable { ... } : gm : brb : b seeds = ["tcp+tls://anon-fore.st:5262", "tcp+tls://xeno.tools:5262"] nice sled logo https://docs.rs/sled/latest/sled/index.html Title: sled - Rust i send both since one accepts only ipv6 conns, the other ipv4, some server issue ok should i upgrade to those? probably, what seeds are you using now? but yes you should upgrade ohsnap, d4sman, and a tor one ok i don't know about them you can try and see if the old ones work, i think ohsnap did some reconfiguration of ports recently so might not work dasman not sure of upgrade status etc tor i have no idea there were also 2 tor seeds/ peers/ addrs linked above configured the above 2 addrs as peers and they're working fine brb : @zero pushed 1 commit to master: 45f0e1ab5e: bench: add sled benchmark SLED writes on laptop=470ns, desktop=261ns for 32 bytes (starting DB size is 1M keys, we delete the key after every insert) insertions are const time, so no difference between DB size of 100k vs 1M overhead for money is 256 insertions vs 1 insertion, but 256 insertions of 32 bytes is simply laptop=120μs and desktop=67μs verifying signatures or ZK proofs has a much larger overhead : @zero pushed 1 commit to master: 031aac7f65: fee/swap: migrate nullifiers to SMT to match money::transfer() i added those darkirc seeds 09:13:59 [ERROR] [P2P] Channel send error for []: IO error: connection reset that's my own external_addr, why do i get that message? 09:15:46 [WARN] [P2P] Failure contacting seed #1 [tcp+tls://xeno.tools:5262]: IO error: timed out 09:15:46 [WARN] [P2P] Greylist empty after seeding 09:15:46 [INFO] [P2P] Seeding hosts successful. bc you ping yourself before sending ur addr to the seed yeah but why would it fail?
oh wait the port is wrong why does it use that port? i have external_addrs=["tcp+tls://[2a02:aa13:8342:1400:81b3:9662:6650:690d]:26661"] it's probably the inbound port of ur connection to yourself disconnecting which is expected behavior 09:15:30 [INFO] [P2P] Connected Inbound #0 [tcp+tls://[2a02:aa13:8342:1400:81b3:9662:6650:690d]:37580] 09:15:30 [ERROR] [P2P] Read error on channel tcp+tls://[2a02:aa13:8342:1400:81b3:9662:6650:690d]:37580: IO error: connection reset is that because i disconnect from myself? yes it's normal and part of how the channel stuff gets cleaned up ok well normal behaviour shouldn't display errors but nbd for now, but should be fixed eventually i can't connect to any seeds it displays the same error since it's using the standard net code for channels etc i mean i connect to anon fore but it has no hosts to share Error: DagSyncFailed weird, that's wrong so i end up here 09:15:46 [INFO] [P2P] Disconnecting from seed #0 [tcp+tls://anon-fore.st:5262] 09:15:46 [WARN] [P2P] Greylist empty after seeding 09:15:46 [INFO] [P2P] Seeding hosts successful. also why does darkirc IRC keep disconnecting and reconnecting? that's really annoying, it does it like 5-6 times i need to see the full log output since i get this on seed ffs can't copy paste from server it says 'Sending 4 addrs to...' possibly you filtered out all the addrs, i need to know why i don't know re: darkIRC, i get that too https://agorism.dev/uploads/output.txt can you run w -vv $ ./darkirc -vv &> /tmp/output.txt && doomp /tmp/output.txt ty fuck my server was killed trying to compile darkirc yikes https://agorism.dev/uploads/output.txt view it with less -R output.txt 09:25:44 [DEBUG] (26) net::hosts::insert(): Filtered out all addresses btw did you delete your hostlist before re-running? no ok will do still the same issue ok i need to add some debug statements 09:31:35 [WARN] [P2P] Greylist empty after seeding fyi this node doesn't support tor? no i can only connect to anon-fore yes but i can see it storing tor addrs in its dark list which means it's an unsupported transport should i enable tor? nbd i should be able to at least connect to the seed nodes or at least anonfore yes ofc you are connecting fine fyi the issue is that ur node is filtering out all the addrs it recvs i need to add more debug statements to understand why, since the only thing i can see from this log is that it put a tor node on the dark list can you run again with trace -vvv (saves you having to recompile) $ rm -fr ~/.local/darkfi/darkirc* && ./darkirc -vvv &> /tmp/output.txt || doomp /tmp/output.txt ty gm that's not trace or maybe i have the prev log gm https://agorism.dev/uploads/output.txt ok good ok this is a bug nice catch : @draoi pushed 1 commit to master: 1c5586cf6e: store: fix bug in filter_addrs() that was causing peers with the same ports as us to get dropped hanje-san: make clippy is returning some errors ^ your darkirc node connects now fixed : @zero pushed 1 commit to master: ef71a31ffe: sdk: fix broken unit test wonderful, it works! i'm connected : yoooo why does dnet not show any packets for me? oh wait it does nvm i only have one connection to anon fore.st, i guess this network is so small : hey :D yeah it's small + the bug i just fixed may have stopped nodes from propagating properly i tried upgrading my server but it's not connecting i need to update my servers too what's the connection issue? updating now, should lead to better network topology it simply won't connect to any seed connection refused? 
11:28:33 [WARN] [P2P] Failure contacting seed #0 [tcp+tls://anon-fore.st:5262]: IO error: timed out same for xeno.tools seed? there's some firewall issues on my servers i think 11:29:03 [INFO] [EVENTGRAPH] Syncing DAG from 0 peers... maybe DAG should not try to sync when there's 0 peers, it should pause 11:29:23 [WARN] [P2P] Failure contacting seed #1 [tcp+tls://xeno.tools:5262]: IO error: timed out weird could be an issue with my servers rather than net, hard to tell i'm updated now anyway anon-fore works locally but not on my server yeah possibly to do with ipv6 can you try this seed: "tcp+tls://ohsnap.oops.wtf:25553" s/updated/updating 11:33:16 [INFO] [P2P] Connected seed #2 [tcp+tls://ohsnap.oops.wtf:25553] 11:33:16 [INFO] [P2P] Disconnecting from seed #2 [tcp+tls://ohsnap.oops.wtf:25553] 11:33:16 [INFO] [P2P] Seeding hosts successful. ohsnap works i think ipv4/ ipv6 firewall issue on my servers brb ash: here? any news on TxPipe.io? Hi trying to connect to testnet. are the lilith seeds the best ones? hanje-san: Fede told me that it would be good to organize a meeting to see how they can help hanje-san: They told me to wait a bit until they organize though. I want to contrib to the documentation, just open a PR? hanje-san: you're right, apologies, I was listening but not paying attention to detail PR should be better now what's the guideline regarding adding new errors? from_wif can return a checksum error or a wif_prefix error; currently returning DecodeError with distinct strings the tests would benefit from not having to check the error string, but it's a minor thing I guess loopr: just add the error, WifDecodeError is sufficient hanje-san: Hi! hihi loser: upgrayedd will know but he's afk until monday *she I have a question Is the proof of membership over the merkle tree just single use? Or can I send multiple proofs without using the nullifier? If so, how are replay attacks prevented? brb ash: i don't understand? wdym single use? : @zero pushed 1 commit to master: 37e642922b: doc: add section specifying DB formats for sets By proof of membership I mean a proof that states "I have a valid leaf of the merkle tree". so you have a ZK proof that some item C is in the tree R Once that proof is sent it becomes null? haha no ;) a ZK proof is always valid In this case the item C is my credential hanje-san: Sure, it is always valid, but if someone intercepts the proof they can do a replay attack. So, to explain myself better, can I construct different proofs without making my commitment null? The replay attack in this case is to intercept a proof and then impersonate the user using the same proof. no, you cannot do replay attacks on zk proofs for invalid witness values you need full knowledge of the witness values, there is no weakness in ZK itself (there might be a flaw in your scheme though, but i'd need to see it to know more) brawndo: when you're around, could you review/approve this doc? https://darkrenaissance.github.io/darkfi/arch/dao.html#tree-states-on-disk it's in the DAO section just randomly (will move it later). I have a couple of Qs there for you. Title: DAO - The DarkFi Book Hmmm, that means that I can construct different proofs proving that some C is in R?
Otherwise, if an attacker gets a valid proof of a user, they can just copy-paste it and the verifier party would notice the difference wouldn't * correct usually you don't just prove there's some C in R, you *must* say other stuff about C like maybe C is a public key (or commits to a public key along with other values), and then you prove you know x such that P = xG (x here is the private key) a proof of ownership for x, is the same as a signature, which is equivalent to a ZK proof showing you know x such that P = xG Understood. But still I don't get how the replay attack is avoided. My feeling is that one should construct different valid proof using a public input that changes. proofs* Like a counter, idk Or other dynamic value maybe you could try to describe what you're trying to do, maybe type up a document with your solution, and i will iterate with you I'm working on a document right now, but it's very messy once I have it done I will share np, but i want to help you with your issue, but just hard to imagine rn since i have no context It is true Hmmm. I'm working on the zk-credential tutorial. So, the first thing is to authenticate as a user of a service (or a member of a DAO could be too). My idea up to now is to use the merkle tree to represent the set of active users. so give me an example of an action then active user = credential, so the merkle tree contains the list of credentials. A credential is a hash of several values (attributes) associated with that user. Exactly here's an action: posting on a forum. you must have a valid user account well lets start with 1. making an account One attribute could be the privileges of the user so that he can perform actions ++ ok good i create a credential C = hash(username, ...) and a ZK proof that it's constructed in such a way without revealing the attributes themselves. I give C and the proof to the forum owner who makes a signature using their forum public key. now C has a signature, all this data is posted on chain, and C is added to the merkle tree. ++ 2. now i want to post on the forum (well lets ignore username and say everyone is Anons) agree C = hash(P, ...) where P is a public key I prove (without revealing C), that C is in the tree R, and I know x such that P = xG (without revealing x or P), and that C = hash(..., P, ...) done any problems? So far so good Want to add something more? *ash is listens* *ash listens* well maybe now you want to delete the forum post or modify it when you posted the message M, you also added a keypair There the privilege part comes into play, so that C = hash(P, permission) is it ok? so now the ZK proof is that either you know q : Q = qG OR (you have a user_id >= 2 AND C in R AND P = xG AND C = hash(..., P, user_id, ...)) here Q is the random public key for M which allows editing it, and the : means such that the OR clause is what you're after (sysadmin editing your post) got it *nods* nice! feel free to shout Each time I want to authenticate as a user, I send a proof that demonstrates that (1) C is in R; (2) that P = xG; (3) that P is in C. How do we avoid someone doing a replay attack? Normally the authentication is done over an encrypted channel I suppose, but supposing that wouldn't be the case and the attacker could intercept messages. Of course I can make my credential null each time I authenticate, and append a new leaf to the MT. But in a blockchain that is a transaction fee. : @draoi pushed 1 commit to master: 6504ceceb7: net: fix bug in outbound session that was restricting slot connections...
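(Condensing the scheme sketched above into a single statement, using the chat's own notation; this is a summary of the discussion, not the actual circuit:)

public inputs: R (merkle root of credentials), M (the action/message being authorized)
witness: x, attrs, merkle path for C
prove: P = xG, C = hash(P, attrs), and the merkle path opens C to the root R

Binding M into the proof as a public input is what stops replay: the proof only verifies for that exact action, so copying it to authorize a different action fails.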
test crypto::util::test_fp_to_str ... FAILED fyi : gm darkirc usoooors : there was a bugfix. plz update. tysm <3 0w0 lol ok running test ash: first thing you say is correct, but then you say "how to stop replay atk" Maybe... I guess to avoid replay attacks on proof of ownership, I would have a circuit that proves simultaneously that I know an x such that xG = P; and I can construct a public product P = private_key * other_private_number. you cannot take a ZK proof, and swap some part of it with another part so all that extra stuff you show inside the ZK proof *must* be part of the same ZK proof the important part is just C = hash(..., P, ...) and P = xG since nobody knows x *except* the owner of the credential note you could also do P = hash(x) too : @zero pushed 1 commit to master: f92fc9b096: sdk: add missing "0x" prefix to unit test strings ^ fixed When you say swap some part of it you mean the inputs? : darkirc works now, bam instant connect and DAG synced : amazin yeah that was an important fix ash: i mean if I have a proof that a = hash(x), b = hash(y), then if i want to make another proof that a = hash(x), c = hash(z), then i must know both x, z. I cannot use the first proof to construct the 2nd : upgrading my server too : ty : server doesn't connect : https://agorism.dev/uploads/darkirc_config.toml : ty : i'm running the same binary as on my computer : anon-fore and xeno-tools are timing out : oh wait : deleting hostlist : yeah i think it's to do with firewalls on the server, idk tho i'm rebuilding those now : nope not working : lemme rebuild : fyi you didn't need to delete this hostlist this time Yes : i think you can do: : # iptables -L : to see if any firewall exists : (i might be wrong tho) : ty One cannot take one to make the other, that part I get. But if one of those proofs performs an action, I can copy the proof that you constructed know x,y or x,z and perform the same action. That's my point really. knowing* yes but the proof specifies the action Of course one has to know the valid inputs to construct the proof, but once the proof is created a person can just copy it. And that action is variable, there's some uniqueness to it I suppose. darkfi/src/contract/dao/src/model.rs:122 check this, it's how the DAO works. You make a proposal to call some contracts. All of those actions are put into a single pallas::Base which is used inside ZK https://darkrenaissance.github.io/darkfi/spec/contract/dao/scheme.html#propose Title: Scheme - The DarkFi Book Each time that you create a proof there's something regarding it that changes. Maybe it's a specific value of the action, an id, or nonce, idk Otherwise the proof would be the same and replayable correct we should have functions in zkas ^^ then it would really help code reuse and structs It's very late here, I'm going to think about this more and check the docs you shared. I appreciate the help and time a lot <3 I learned, thank you gn! gn : i think my seeds were misconfigured since i was setting accept_addr to the external addr rather than inbound addr : have redeployed now, hopefully that fixes it : found another bug... fixing ... gn : ok lmk when to redep gm >we should have functions in zkas ^^ then it would really help code reuse Yeah they should be like macros actually so the compiler unrolls it hanje-san: I'll read the doc a bit later What are the questions? they are marked Q: in the text there's 2 but i'm interested in your general opinion ok going offline now, cya later afk : bug a lil tricky...
needs thought : ok figured it out : but gtg afk for a bit so will probs push 2m hanje-san added the new error and edited the code accordingly gm maybe you want to elucidate on the generator constants task hey o/ loopr: I DM'd you sorry will check later again ash: are you sure you sent a dm to me? can't see any message vanished into the void possibly oh also question, what happens if someone sends a DM when my ircd/darkirc is not running? I am guessing it just gets lost? but I don't think that should happen, the dm should have been logged by others, right? actually not just dm but any kind of message yeah but only you can decrypt a dm to you yeah but the dm still exists, can't I retrieve the dm to decrypt it? depends if the daemon discards or not messages it can't decrypt I guess. don't know these details. airpods69: the DM still exists because it's stored in a buffer so you'll get it next time you open the chat ahh so if I were to stop the ircd process and you send me a dm while I am away, then I'll get it the next time I turn on the ircd process again? Got it yep, pretty sure that's how it should work neat, thanks for clearing that up np hmmm it could be that sometimes I switch between master and v0.4.1 so the config file changed loopr: forget it, the config file is in a separate folder Sent you another DM deki: hi hey ash Hope you're doing great, I have a contribution to the docs, just send a PR? I'm doing alright thanks, hope you're well too. And yeah pretty sure it's just a PR same as usual I'm not part of the dev team btw, but from what I've seen, doc changes are PRs too I'm fine too thank you :) :) Good, so I will open it. ash: still don't see the dm loopr: I have just sent you another one hang on might be a config issue on my end can you share your pubkey again pls? yeah a sec nvm I got it gm gm draoi sup working on my internship project so that I can rest tomorrow (sick here), what about you? working on a bugfix for darkirc feel better soon whats the bug? also thanks https://darkrenaissance.github.io/darkfi/arch/p2p-network.html#hostlist-filtering Title: P2P Network - The DarkFi Book basically we have an abstraction called HostState which keeps track of whether a host is pending, connected, moving between hostlists, etc when a channel disconnects a peer's host state can become invalid if the following thing happens in the wrong order: channel disconnects -> move_host to greylist -> reset state to none so i'm just moving some things around to ensure the atomicity of that process Does that thing happen in the wrong order though? (ig it happened thats why it is a bug?) very rarely but yes it can happen we have these subscribers that we use in p2p that many different processes can be listening to: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/system/subscriber.rs Title: darkfi/src/system/subscriber.rs at master - darkrenaissance/darkfi - Codeberg.org so in this case, different parts of the code base receive a stop signal for a channel that has disconnected, and so the code execution that follows can happen in different orders depending on which part of the code base receives the stop signal first so i'm aggregating what needs to be atomic into a single function, rather than it happening in separate places where wires can potentially cross everything in order using a single function sounds like a good idea. but can't the "wires" be forced not to cross each other though? Would lead to the same result but with a bad solution in my opinion.
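(A rough sketch of the aggregation draoi describes; the names are hypothetical, not the actual net code. The point is that every step that must happen together on disconnect lives in one function, so different subscribers can't interleave them:)

// Called once when a channel's stop signal fires; the steps always run in this order.
async fn cleanup_channel(&self, channel: ChannelPtr) {
    // 1. remove the channel's stop subscription
    self.remove_sub_on_stop(&channel).await;
    // 2. downgrade the host to the greylist
    self.hosts.move_host(channel.address(), HostColor::Grey).await;
    // 3. only then reset the HostState back to none
    self.hosts.unregister(channel.address()).await;
}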
(also take this with a pinch of salt from me, still learning) you can generally force atomicity with rwlocks/ mutexes etc, but we're trying to avoid overuse of those in the code we could also add a hoststate which is "Disconnecting" and try and force the order of execution via this State but i think the simplest thing is just to do the channel cleanup atomically in a single function which we already have for this purpose: https://codeberg.org/darkrenaissance/darkfi/src/commit/f92fc9b0969c47bbee34c74e013055b30646a798/src/net/session/mod.rs#L49 Title: darkfi/src/net/session/mod.rs at f92fc9b0969c47bbee34c74e013055b30646a798 - darkrenaissance/darkfi - Codeberg.org ohh if a function already exists then code reuse is a no brainer. ++ yeah i think it's conceptually correct but it means we will add some conditional logic into remove_sub_on_stop() since it's code we do not do in all cases i.e. happens for outbound and manual sessions, not all sessions (previously was called in outbound and manual separately, but that led to bug described above) there's a good overview of the p2p and main abstractions here: https://darkrenaissance.github.io/darkfi/learn/dchat/dchat.html Title: P2P API Tutorial - The DarkFi Book gm : y0 : sup : lil update incoming airpods69: only darkirc ^ (see mirror-bot) buffers offline messages, ircd does not : kk : @zero pushed 1 commit to master: bfcd383f3b: doc: on disk states, use (block_height, tx_index) tuple instead of tx_hash since it's 1. more compact 2. the info we actually need hanje-zoe: The doc is clear, looks good When you say "coin type", what does it mean? huh hanje-zoe now I am more confused... how does mirror-bot buffer offline messages? (lets suppose in dms, I dont think it is present there right?) : darkirc buffers offline msgs, not mirror bot : mirror bot just mirrors msgs from darkirc to ircd ah yea thats what I thought but now im wondering what hanje meant by that : he meant see the mirror bot for darkirc msgs also I'll read the p2p api tutorial in a bit that you sent earlier (fell asleep that time) : s/he/she : nice : fyi for anyone coming to darkirc and reading this msg log- i am replying to a person in ircd #dev via the mirror bot, probs shouldn't do that lol lol mirror-bot doesn't do it the other way around? I'll try again to get darkirc running as well. Attempt No. IDK lets see how it goes i think the last time the difficulty was related to tor (i think your tor setup), you could also try with a non-tor setup/ standard config i'm about to push some changes tho Yep last time it was my tor setup, gonna try in a vm so that I don't have to turn off ircd you don't need to turn off ircd, you can run both at the same time just configure different ports etc and run a separate client if weechat, run like `weechat --dir [path_to_second_directory]` for darkirc that would be the irc_listen port in config right? I could have opened docs, my bad doing that ty, yes the port specified by irc_listen_port should be the same as your client brawndo: either coin (0) or nullifier (1) !topic Header::height should be u32 Added topic: Header::height should be u32 (by hanje-zoe) : @zero pushed 1 commit to tx_idx: 86f373667e: validator/runtime: add missing tx_idx upgrayedd: ^ lmk if good and i'll merge tmrw !topic add call_idx to env => remove from process() ix Added topic: add call_idx to env => remove from process() ix (by hanje-zoe) hanje-zoe: whats the usage of tx_idx? and index against what? inside the block? total txs?
upgrayedd: hey, it's the index of the tx inside the block hanje-zoe: Yeah they should definitely be different dbs hanje-zoe: They can collide ok they can't collide because the type separates the rows i mean value index upgrayedd: needed for this https://darkrenaissance.github.io/darkfi/arch/dao.html#db-merkle-roots Title: DAO - The DarkFi Book ah you wouldn't index them by the field element yeah so c1, c2, n1, n2 would have indexes 0, 1, 2, 3 Well it also depends on your storage, if sql you can keep it all in one db and separate the tables as you see fit anyway it's the wallet so doesn't really matter tbh i can make 2 or 1 with little diff It would be nice if you can backup a single file And that being your wallet ah yeah we use sql, i could use sql for this yy actually having to write all the coins might be quite slow (into sql) this is all of the coins (not just ours), so for every new tx, we'd be writing to sql which is non-ideal anyway nbd, we can figure it out, the main one is the DB * Roots dbs hm yeah : @zero pushed 1 commit to tx_idx: 3e1a2e7ffc: validator/runtime: add missing tx_idx hey i'm trying to set up a local dev environment and have a single node running but i get an error with `./drk wallet --keygen` Failed to generate keypair: QueryPreparationFailed the wallet.db in the drk config exists noot: are you using contrib/localnet/darkfid-single-node? yeah exactly you need to keep using the -c flag so from in that folder: ../../../drk -c drk.toml wallet --keygen as shown in the contrib/localnet/darkfid-single-node/README.md cool that worked thanks! i'm getting an error with wallet --address though Failed to fetch default address: RusqliteError("[default_address] Default address retrieval failed: RowNotFound") also using -c i already got the address with --keygen but not sure why --address doesnt work? iirc --address brings the default one so it will fail if you haven't set one up hence the in-between step of --default-address 1 ah okay cool sorry was going off the main docs, will go off the readme all working now :) had an issue using drk as a dep in the external swapd repo, opened a pr to fix https://github.com/darkrenaissance/darkfi/pull/267 Title: load contract bytes at runtime in deploy_native_contracts by noot · Pull Request #267 · darkrenaissance/darkfi · GitHub let me know any thoughts or if theres another preferred method to fix noot: I don't think that fixes the "problem" *.wasm are build artifacts oh okay, this loads them at runtime so they should be built right? i tested it and it fixes the compilation err it loads the already built .wasm file to store in the contracts db the wasm itself comes from the contract compilation are you using a local path to the repo to check that compilation passes? it works for patch with git = "https://github.com/noot/darkfi" the issue is the path in include_bytes its looking in src/validator/../contract/money/darkfi_money_contract.wasm`: No such file or directory like its not expanding the path correctly when its being used as a lib? yeah probably, since its not part of the darkfid lib so its an explicit path so inside cargo cache, it will try to find the .wasm files two folders up which are not darkfid root the question is, is CARGO_MANIFEST_DIR set up when invoking build, or is it something you need to set manually?
its auto set by cargo to wherever the Cargo.toml for that crate is aha and since its a workspace it gets the repo root Cargo.toml hence it can find the contracts path yeah exactly just makes it nonrelative so it always expands correctly does the repo build normally using that? I mean did you execute make clippy on darkfi? yep it builds could be useful to do the same in all instances of include_bytes! inside src, so we don't come to the same issue in some other import obviously by a new macro like include_bytes_using_manifest or some shit cool yeah, i can do that in a follow up pr im planning on restructuring the bin/drk crate so that i can use it as a wallet lib, unless theres another lib that would be better for using as a wallet? i think bin/drk is what i need for stuff like initializing/generating keys, checking balance and sending txs right well bin/drk is the wallet impl itself, it can act as wallet related functions as well I guess but its not a lib per se yeah im planning on restructuring it so i can use it as a lib basically just making a lib.rs is there another crate thats already a lib that has wallet functionality? no everything is confined into bin/drk ok cool https://github.com/darkrenaissance/darkfi/pull/268 Title: restructure bin/drk as a library by noot · Pull Request #268 · darkrenaissance/darkfi · GitHub smol pr ready noot: it will compile, but running will fail, since Drk uses an RpcClient, to perform requests towards the nodes rpc endpoint, and when that gets created it checks if it can establish connection so the code that uses it must also have an active rpc client if thats ok all good yeah, thats fine this is just to make the drk structs/methods accessible yy gotcha, btw cargo.lock changes should be in a separate commit, better to not commit it all, since we do deps version updates regularly anyway I'm off, will check more thoroughly tomorrow oh okay, the cargo.lock changes were autogenerated tho should i just remove? sounds good :) yy probably because your local caches use later versions so they got pushed into the cargo .lock since they are all minor versions anyway glhf ahh true okay will change back fyi clocks changed hanje-zoe, what do you mean the clocks changed? (just curious cause idk) gm morning brawndo Clocks changed means daylight saving time oh okayy, thanks. loopr: delete "impl WIF for Foo" from wif.rs completely. introduce a WifError enum and use that instead of using Strings. WifDecodeError(&str) becomes WifDecodeError(WifError) .is_err_and(|e| e.to_string() == "WIF decode failed: checksum failed to compute"), so normally there's WifDecodeError but if you're interested in the specific Wif error itself, then you can unwrap and look at the inner WifError to see it // take the first 4 bytes of the second SHA-256 hash; this is the checksum. this comment is wrong builder.extend_from_slice(&darkfi_serial::serialize(self)); self.encode(&mut builder).unwrap() (same for checksum) for the checksum, you have the code to compute it written twice, instead remove verify_checksum(), and make a function called "compute_checksum()" (or similar) anyway that's nbd tbh the main ones are 1. the error strings 2. the impl specializations 3. the serialization also the last lines in from_wif() are unnecessary. why are you doing this? Err(e) => Err(crate::Error::from(e)) also you don't check the decoded bytes length so your code will panic
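(Pulling the review points above together, a hedged sketch of what from_wif() could look like with the length check, prefix check and `?` conversion; the WifError variants, the From<WifError> impl on the crate error, compute_checksum()'s return type and bs58 usage are all assumptions, not the actual PR code:)

fn from_wif(wif: &str) -> crate::Result<Self> {
    let decoded = bs58::decode(wif).into_vec().map_err(|_| WifError::InvalidBase58)?;
    // 1 prefix byte + at least 1 payload byte + 4 checksum bytes
    if decoded.len() < 6 {
        return Err(WifError::TooShort.into());
    }
    // the prefix identifies which type this WIF encodes
    if decoded[0] != Self::wif_prefix() {
        return Err(WifError::WrongPrefix.into());
    }
    let (data, checksum) = decoded.split_at(decoded.len() - 4);
    if checksum != compute_checksum(data) {
        return Err(WifError::BadChecksum.into());
    }
    // `?` converts the std::io error from Decodable into the crate error, no match needed
    Ok(Self::decode(&data[1..])?)
}

(And a sketch of the include_bytes_using_manifest idea from just above; the macro name is the hypothetical one floated in the chat. env!("CARGO_MANIFEST_DIR") expands at compile time to the including crate's manifest dir, which is why the path keeps resolving when darkfi is pulled in as a git dependency:)

macro_rules! include_bytes_using_manifest {
    ($path:expr) => {
        include_bytes!(concat!(env!("CARGO_MANIFEST_DIR"), "/", $path))
    };
}

// e.g. a stable alternative to a path relative to the including source file:
// const MONEY_WASM: &[u8] = include_bytes_using_manifest!("src/contract/money/darkfi_money_contract.wasm");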
We store all merkle roots together with information about exactly when that root occurred. To store when the root occurred, we use an absolute location of (block_height, tx_idx, call_idx). Right now tx_idx and call_idx are hardcoded to 0 since the env doesn't yet have access to this info. You can't have the tx_idx (if that's a block) inside runtime during verification It's agnostic to that is that because of the mempool? we have the block_height block_height is the height at verification time It can be different in the mempool and inside a block ok is there a way to get the height and idx from a tx hash? Maybe from an existing finalized tx so i store tx_hash instead of (height, idx) but later once confirmed, i want to get its (height, idx) from the tx_hash .get_tx_location(tx_hash) -> (u32, u32) u64,u64 they should be u32 u64 is too big for block height We'd need a database modification for those kinds of pointers upgrayedd worked on it, perhaps something is in place already Why is u64 too big? ok, yeah it should exist esp for block explorers the max size of u32 is 4.3 billion which is more than sufficient. that would take 12,257 years to reach ok i'm happy you're aiming long term tho :D u64 is 52,644,817 MILLION years. I think humans will be long extinct by then https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/blockchain/tx_store.rs Title: darkfi/src/blockchain/tx_store.rs at master - darkrenaissance/darkfi - Codeberg.org So here we have k=hash,v=tx We can likely have an additional tree that maps the hashes to blocks yeah or just add it with the tx data how do i get the tx_hash inside wasm? (of the current tx) do we need to pass it into the runtime params? i guess yeah the Env should have it Oh It's not there but I suppose you can add it ++ you don't need to map hashes to blocks, since you have the other way so from runtime you can get a block's txs by its height I don't remember if we have that, but its already mapped You don't know which height a tx is in the only reason to store the actual idx is for performance, so you don't have to look for your tx inside the vec brawndo: you do, verifying height wait I'm talking finalized hanje-zoe: ^ if you mean a new one, then yeah there shouldn't be a tx_idx when you are verifying it Can you explain what you need to upgrayedd?
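(A sketch of the additional mapping tree being discussed; the layout is an assumption: key is the 32-byte tx hash, value packs big-endian (block_height, tx_index). The real record sizes were still being debated at this point:)

fn insert_location(tree: &sled::Tree, tx_hash: &[u8; 32], height: u32, idx: u16) -> sled::Result<()> {
    // value: u32 block height || u16 tx index, both big-endian
    let mut val = Vec::with_capacity(6);
    val.extend_from_slice(&height.to_be_bytes());
    val.extend_from_slice(&idx.to_be_bytes());
    tree.insert(tx_hash, val)?;
    Ok(())
}

fn get_tx_location(tree: &sled::Tree, tx_hash: &[u8; 32]) -> sled::Result<Option<(u32, u16)>> {
    Ok(tree.get(tx_hash)?.map(|v| {
        let height = u32::from_be_bytes(v[..4].try_into().unwrap());
        let idx = u16::from_be_bytes(v[4..6].try_into().unwrap());
        (height, idx)
    }))
}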
He knows this inside-out :) since its not indexed against a block hanje-zoe: the way I understand it, you only need the tx hash, since this is going to happen inside runtime, for which we have the verifying block height (actual block height for blocks, next block height when we are verifying new txs in the mempool) so we only need an fn which gets the verifying_block_height and the tx hash as parameters if the verifying block height exists, find the index of the tx_hash in its vector and we create a runtime function invoking that db function since you will have the tx_hash stored as part of the contract execution params, you don't need to pass it to the runtime again so runtime changes will be just a new function we expose let me know if I don't understand something I don't think its required to pass the tx_hash as part of the Env, but surely can be added since it might be a common contract execution param : @zero pushed 1 commit to master: 51d7f2996a: dao::propose(): when a proposal is made, we snapshot the current coins root (old) AND nullifiers root (newly added) upgrayedd: i will add tx_hash to runtime, and use that instead of (block_height, tx_idx) since you both made good points, esp re: mempool but we still should have tx_hash -> (height, idx). You could even include that data in the tx_hash -> tx_data DB so that it becomes tx_hash -> (height, idx, tx_data) Yeah those can be baked in on finalization as to why i need it: well there are many reasons this is a good idea in general. Given a tx_hash, you should be able to lookup the data and also find when/where it was confirmed (for example you want to display in a wallet the timestamp, or count the number of confirms, or show in a blockexplorer when it confirmed .etc) specifically my need is this: yeah tx_hash -> (height, idx) can be an auxiliary tree, with the record created on finalization as brawndo said very easy and fast to do - i lookup some data which was added in a tx, but this data should not be too old. So i want to get the block height from the saved tx_hash (used when saving this data) and check it's not too long ago (current_verifying_height - tx_block_height) aha so its like looking for an older record's absolute position ok yeah the tx_hash -> (height, idx) is a very standard call to have gotcha gotcha gotcha yep ++ kk will add it then in a bit you want it also exposed in the runtime right? to call it from inside wasm yep but i can do that part nah I will add them all in one go ok hehe ty brb rebooting srv yesterday i told you about my branch but it's incorrect so i'll delete it (based on above convo) hanje-zoe: we should confine these additions into single commits, so they are easily reversible since its basically a new fn we expose to both runtime and db yeah its not needed just one clarification: the tx_hash you want to add as a runtime field, will be the tx_hash of the verifying tx, not the old record you want to access its absolute position, that has to be a contract param yes correct ok so you just add it as an extra abstracted info that might be useful to all contracts b
upgrayedd: ^ check this, it should give you specific context when making a DAO proposal, anybody can pick any merkle root, but they shouldn't choose one that is really old brawndo: just a tldr of what you missed: gonna add a new auxiliary tree storing tx absolute position in tx_store and then a get_tx_with_absolute_position(tx_hash) -> (block_height, idx, tx) exposed to the runtime Sounds good Remember to include gas fees if you add an auxiliary tree, why make the call also return the tx? why not just the location? hanje-zoe: maybe the runtime exposed fn should be split brawndo: lol I was thinking about the fees exactly yes split them because the tx data will waste gas since grabbing both the indexes and the tx means more gas i avoid doing unnecessary serializations and hashes, often precomputing stuff because it's easy to exhaust gas hanje-zoe: well they are both useful yes but make it 2 calls then yeah that's what I'm saying aha yeah good darkfi/src/contract/money/src/entrypoint.rs:177 re: exhausting gas get_tx_absolute_position(tx_hash) -> (block_height, idx) and get_tx_with_absolute_position(tx_hash) -> (block_height, idx, tx) is it 1 DB or 2 DBs? one db different trees the second call also grabs the tx from the main tx_store tree while the first one just the record from the auxiliary tx_store tree yeah so better is get_tx_location(tx_hash) (or absolutely_best_position) and get_tx(tx_hash) is there any benefit to combining the calls? aha so you mean to skip it altogether since we already got get_tx(tx_hash) we just need to expose that to the runtime so instead of doing a combined call, you will do: get_tx_location(tx_hash) and then a get_tx(tx_hash) if you also need the full tx yep it's better imo ++ it might be faster to use a single tree but nbd you can always do &data[8..] just for the tx data let me think iirc sled will grab the full bytes for the record so when you want just the position you will also load the full tx into memory Yeah you'll still use the memory for a bit so not optimal ah annoying, ok nvm btw run make clippy and fix failing stuff :D my db in libbitcoin would return a record and you could indicate which data you will read aha ty will do oh like an indexing record or metadata showing which bytes are what, so you just grab those we can't do that with sled, as the stored bytes are retrieved as a whole hence why we use the aux trees a record is just a pointer to a location but the get doesn't return any bytes you then have to call read on the record and indicate which part of the record you read like a cursor huh? where you specify the position to read from yeah like a cursor gotcha gotcha gotcha there's a lot of cool tricks you can do where you write all the txs to disk non-atomically. it's nbd if you rollback because the tx_hash still exists in the db but who cares : @zero pushed 1 commit to master: 4f2f660d61: money/integration: add missing import for `darkfi_sdk::blockchain::expected_reward` : @zero pushed 2 commits to tx_hash_unwrap: a5026e9684: tx: change tx.hash() -> Result to tx.hash() -> TransactionHash, by calling .unwrap() on blake3 hasher. This should be safe (see code comment in tx/mod.rs:188 inside fn hash() ) : @zero pushed 2 commits to tx_hash_unwrap: 6a6a664d3e: runtime: add tx_hash to runtime params upgrayedd: can you check this branch ^ before i rebase on master?
this commit: a5026e9684 because tx.hash() no longer returns a result, it means a lot of validator functions can return () instead of Result<()> hanje-zoe: Blake3 hasher .update() method never fails. source: "Trust me bro"? upgrayedd: https://docs.rs/blake3/latest/blake3/struct.Hasher.html#method.update Title: Hasher in blake3 - Rust oh ok then yeah removing result is good, less clutter i'm checking the blake3 code to confirm yep it's safe https://github.com/BLAKE3-team/BLAKE3/blob/master/src/lib.rs#L1524 Title: BLAKE3/src/lib.rs at master · BLAKE3-team/BLAKE3 · GitHub literally just self.update(input); Ok(input.len()) : @skoupidi pushed 1 commit to master: f9a58ca5ad: runtime/import/util: minor optimizations retrieving block info stuff yy checked it also XD : @zero pushed 2 commits to master: 5c9e3bd4a1: tx: change tx.hash() -> Result to tx.hash() -> TransactionHash, by calling .unwrap() on blake3 hasher. This should be safe (see code comment in tx/mod.rs:188 inside fn hash() ) : @zero pushed 2 commits to master: 0967744635: runtime: add tx_hash to runtime params i only recently learnt about git add -u and it's been a gamechanger : @zero pushed 1 commit to master: 9878fff12d: runtime::{merkle, smt}: change value for roots_db from (blk_height:3, tx_idx:2, call_idx:2) to (tx_hash:32, call_idx:2) What about -p ? i know that one, but -u is more useful hanje-zoe: what ACL should get_tx_location(tx_hash) and get_tx(tx_hash) have? I reckon Exec is enough for now, or do you need it anywhere else? metadata likely yes metadata and exec only : @skoupidi pushed 3 commits to master: 254af116f4: blockchain/*: minore code look cleanup : @skoupidi pushed 3 commits to master: 0475a8e2d3: runtime/import/util: corrected some log targets : @skoupidi pushed 3 commits to master: 7d4151c230: sdk: fn get_tx(hash) added : @draoi pushed 6 commits to master: 69c6530a5d: net: move downgrade to greylist into remove_sub_on_stop()... : @draoi pushed 6 commits to master: 11c65a7705: store: cleanup move_host() to reduce code reuse... : @draoi pushed 6 commits to master: 1f1bfd3dce: net: flatten move_hosts() so unregister call happens outside function... : @draoi pushed 6 commits to master: c794507458: store: fix logic error in is_connection_to_self() : @draoi pushed 6 commits to master: 1cd330b798: net: create RefineSession... : @draoi pushed 6 commits to master: 52e6ea0530: net: implement greylist downgrade and goldlist upgrade : @skoupidi pushed 1 commit to master: 70aeb839f5: blockchain/tx_store: unified main tree with the pending tree into single store structure !topic darkirc status Added topic: darkirc status (by brawndo) !list Topics: 1. Header::height should be u32 (by hanje-zoe) 2. add call_idx to env => remove from process() ix (by hanje-zoe) 3. darkirc status (by brawndo) biab Anyone there? vn7: yes : @skoupidi pushed 2 commits to master: ea93623ff8: blockchain/contract_store: unified wasm and states trees into a single store structure : @skoupidi pushed 2 commits to master: 34dd60a9b7: blockchain/block_store: unified all trees into a single store structure : @draoi pushed 1 commit to master: cb821b651e: chore: move excessively verbose debug statements to trace draoi: since you are doing p2p log targets, move everything into at least debug wdym why not log stuff like connected ips or connection errors to peers (not seeds) under info oh rly, isn't that useful tho?
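(Back to the blake3 .unwrap() discussion above, an illustration of why it is safe: blake3::Hasher implements std::io::Write, and per the lib.rs line quoted its write path never errors:)

use std::io::Write;

fn hash_bytes(bytes: &[u8]) -> [u8; 32] {
    let mut hasher = blake3::Hasher::new();
    // blake3's Write impl is literally `self.update(input); Ok(input.len())`,
    // so this unwrap can never fire
    hasher.write_all(bytes).unwrap();
    *hasher.finalize().as_bytes()
}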
upgrayedd: lets discuss in meeting, i think p2p output is quite reasonable : @zero pushed 1 commit to master: fbe13ed480: DAO::propose(): goodbye nullifiers, hello SMT !topic remove get_verifying_block_height_epoch() Added topic: remove get_verifying_block_height_epoch() (by hanje-zoe) hi greeets draoi: yeah but they shouldn't be part of main bin/app logs, hence why moving them to debug hanje-zoe: sure brb i just realized a bunch of functions in runtime/util are missing proper ACL like? b height, time .etc these are read functions, anyone can call them anywhere you're not allowed to read in update() section Hi hellos ain't the call going to fail then? if not then yeah ACL should be added get_verifying_block_height_epoch oops i meant darkfi/src/runtime/import/util.rs:222 i'll go through runtime after dao and check everything has proper perms yo glhf frens? !list Topics: 1. Header::height should be u32 (by hanje-zoe) 2. add call_idx to env => remove from process() ix (by hanje-zoe) 3. darkirc status (by brawndo) 4. remove get_verifying_block_height_epoch() (by hanje-zoe) hi ohaiyo height, tx_idx and other indexes should be u32 !start Meeting started Topics: 1. Header::height should be u32 (by hanje-zoe) 2. add call_idx to env => remove from process() ix (by hanje-zoe) 3. darkirc status (by brawndo) 4. remove get_verifying_block_height_epoch() (by hanje-zoe) Current topic: Header::height should be u32 (by hanje-zoe) >t. isn't planning on living forever gm brawndo: your face when https://www.youtube.com/watch?v=uD4izuDMUQA Title: TIMELAPSE OF THE FUTURE: A Journey to the End of Time (4K) - YouTube nice vid whats the u32 life expectancy? 13,000 years yeah what's the rationale for decreasing the size? u64 is like 500,000 million years or sth also nonce should be u32, it will take 1000s of years to generate the max number of hashes darkfi 1000 year dynasty darkfi forever ok about nonce, but why reduce the height? as HCF whats the rationale? >>> (2**32 - 1)*90/(60*60*24*365) 12257.326755136986 32 bits is more than adequate to represent 12,257 years of darkfi blockchain so the argument is to save space? whats the practical implication u64 is still smol to store it's the wrong data type there is no wrong data type u64 is for fine grained data like nanosecs or bytes in a computer, for measuring things only wrong usage u32 is for counting objects, measuring sizes .etc https://developer.bitcoin.org/reference/block_chain.html Title: Block Chain — Bitcoin the practical thing I'd be concerned about is potential bugs with casting, e.g. `as u32` and so on it might not come up well i have to cast to u32 everywhere but the codebase is kinda loose with these sorts of conversions it's possible to truncate values by mistake sure but we will never have a block index that doesn't fit in u32 i agree with you on that, and sure u64 is probably way bigger than necessary just noting that we have to be careful with conversions if we change something it's an easy thing to miss. clippy can help though if we configure it to flag those things for us the best way to deal with those conversions is to use specialized types like BlockHeight I'm not opposed to that https://docs.rs/lightning/latest/lightning/chain/trait.Listen.html Title: Listen in lightning::chain - Rust The type aliases can still be cast at will btw https://developer.aleo.org/leo/operators/#blockheight Title: Leo Operators Reference | Welcome to Aleo Documentation everyone using u32 is the 90 in the above calc in seconds, the block finality time?
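(A sketch of the specialized-type idea mentioned above: a newtype rather than a bare alias, so accidental `as` truncation or mixing heights with other counters becomes a compile error; hypothetical, not an existing darkfi type:)

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct BlockHeight(pub u32);

impl BlockHeight {
    // explicit, checked construction instead of scattered `as u32` casts
    pub fn from_u64(h: u64) -> Option<Self> {
        u32::try_from(h).ok().map(BlockHeight)
    }
}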
re btc: block production is capped so it makes sense to use the smallest appropriate data type yes ++ the "everyone using it so let's also" is not a proper argument... upgrayedd: you mean the reward, right? blocks still get produced when reward goes to 0 I'm not opposed to changing it, just saying you should argue the change with real stuff cosmos and solana use u64 hanje-zoe: yy, whenever you have something capped you can always use whats best to reduce storage footprint ah well it's the most meaningless pointless optimization upgrayedd: bitcoin blocks will still get produced and everything operates as normal (with fees) its not meaningless or pointless... yeah I gave it as an example of when it makes sense to use smaller types ok i put the decision to you guys ;) just offering my input that u32 is big enough, but if we want u64 for other reasons like compatibility or so then it's nbd saying "oh we will have died by then" is not an argument imho we will have died 130 times over lol I want to produce eternal code that will run in a bloke's neuralink in 10m years why cap it from now, when I know I can make it future proof? lol (again not opposed, just stating my views) 13,000 years in the future ok i admire the sentiment, lets cont...? ++ ++ ++ ++ !next Elapsed time: 17.5 min Current topic: add call_idx to env => remove from process() ix (by hanje-zoe) ok so i want to use (tx_hash, call_idx) as the absolute location of a call, so in runtime host functions, i need access to the call_idx i was thinking to add this into the runtime. It could even be updated between calls to avoid spawning Runtimes in the loop if it's added to the runtime params, then we don't need to pass it into ix anymore (it's redundant) darkfi/src/contract/money/src/entrypoint.rs:133 call_idx is (or has been) used in wasm functions to check the position of "self" call, but also other calls within the transaction ^ it's also missing from Deploy yeah so it would get moved into being a host function For various verification reasons, and data retrieval "avoid spawning Runtime" in the loop -> Not good behaviour, it should be reinstantiated with each call ok that's better should i move it out of ix, and into runtime then? Why do you need it in the host? https://darkrenaissance.github.io/darkfi/arch/dao.html#db-merkle-roots Title: DAO - The DarkFi Book i have the tx_hash but not the call_idx when creating a snapshot also it is missing from some places like Deploy see here: darkfi/src/contract/money/src/entrypoint.rs:133 right now runtime has: verifying_block_height, tx_hash, ... it seems natural to add call_idx too since it's a part of the 'calling frame info' Sure ok ++ !next Elapsed time: 6.6 min Current topic: darkirc status (by brawndo) Just make sure all the tests work Write rustdoc Do clippy lints etc. will do ty What's the status with darkirc, I see people are using it? Should we be deploying more nodes? there was a major update to net here: 1cd330b7986fb1a575ce3813e1db10f21b774b30 it fixed a panic so if everyone could update that would be great i've updated, seems to work i'm nearing finishing dao. was hoping to finish today, then will review net code however i also want to modify src/net/test.rs to be more expansive and cover many diff cases Nice, I'll set up a public node then and observe it for a while i had some trouble syncing the event graph from my local node post-upgrade, it was complaining about a missing parent tip dasman: what's the status on the event graph debugger tools?
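(A sketch of the env shape settled in topic 2 above; the field names are hypothetical. The point is that tx_hash and call_idx become host-side calling frame info, set per call, instead of being packed into the instruction payload:)

pub struct Env {
    // ... existing fields ...
    pub verifying_block_height: u32,
    pub tx_hash: TransactionHash,
    // index of this call within the transaction, set by the host for each call
    pub call_idx: u8,
}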
nice you made the refinery into a new session, that's a good use of session draoi: we have a working model rn, but working on a better one to read, like a browser also about darkirc, there are some garbage nodes being shared around the network (i.e. not accessible peers that clog up the hostlist), i believe updated nodes should no longer share these peers, but lmk if anyone notices anything to the contrary navigating through layers, and then in each layer navigate through events themselves and more details yes hanje made my life easier aggregating everything in Session and having a bitwise indicator to distinguish refinery processes yes hanje, ... (i didn't do that) also now there's an ownership structure, so the processes get destroyed on p2p.stop() i was replying to you was just clarifying for others ++ Nice !topic xz Added topic: xz (by HCF) !list Topics: 1. remove get_verifying_block_height_epoch() (by hanje-zoe) 2. xz (by HCF) upgrayedd: can i delete this call? !topic repo status (wallet) Added topic: repo status (wallet) (by brawndo) there's a few cleanups still to do, now for example since we can distinguish refinery stuff we don't need to print an error when a node disconnects following refinery process (expected behavior) is it used anywhere? !next Elapsed time: 5.9 min Current topic: remove get_verifying_block_height_epoch() (by hanje-zoe) darkfi/src/runtime/import/util.rs:240 darkfi_sdk::blockchain::block_epoch(env.verifying_block_height) i could instead do: darkfi_sdk::blockchain::block_epoch(get_verifying_block_height()) so it doesn't need to be its own host function (good to reduce host functions where possible) oh yeah good then yeet ++ !next Elapsed time: 1.5 min Current topic: xz (by HCF) in the last few days there was a backdoor discovered in the tool `xz` (which does compression like zip or tar etc.) I think the consensus is that most people aren't affected but would still recommend people check their servers and workstations for a vulnerable version ++ general solution is to update with pacman, apt, etc. which will downgrade to a safe version ACTION not a systemd user NO LINUX IS MORE SECURE Yeah do your updates folks upgrayedd: ah yeah so i heard it's due to systemd, right? imagine using systemd i think it is integrated with systemd, yes hanje-zoe not due to systemd the exploit makes use of systemd afaik i was trying to understand if that's the case, only systemd lol it just expects you to use it so if you don't it fails to do its job aha wonderful ACTION not affected ACTION we are  so e.g. void was on the backdoored version, but not an issue unless systemd was also manually configured for void who uses void and systemd? lol contrarians Still, perform your system maintenance anyway good advice HCF, updates are not lighthearted ++ yeah for sure, also we do have install instructions for like deb-based stuff so wanted to give a PSA that people might be vulnerable word i love updating, it's so low effort and feels like a mini-christmas even though most here are too advanced for such basic OSes the only deb based that matters is devuan and its derivs Update your Macbooks sers lol esp when you see gimp or blender have a new splashscreen and updated styling !next Elapsed time: 4.5 min Current topic: repo status (wallet) (by brawndo) current status: broken Wondering what's left to do before we launch a testnet hanje-zoe: run make clippy before commits lol DAO is being finished I suppose?
yep i will, just on a roll rn will fix it today later Then the last thing is the wallet updates (I think) nice yes DAO is so so close, it took way too long because of indecisiveness yeah I guess so, and test all the functionalities somewhat work locally Yeah but we did benchmarks and now should be finished tmrw pre-push hooks could be nice even sync is not "crucial" for testnet I started on an external smart contract for IRC ratelimit nullifiers coool So I want to use the testnet to deploy it there i just updated DAO::propose(), now doing DAO::vote() (will add call_idx .etc) And we could have a testnet RLN stuff working that's wicked ++ hanje-zoe: I added the get_tx() runtime call, will add the get_tx_location() in a bit maybe we could give a grant to weechat to implement proper buffer ordering had to do some impulsive cleanup on the blockchain database code upgrayedd: ty np, i just have the code commented it's no rush, it will be ez to upgrade once ready hanje-zoe: Only if you say it's for Matrix :P (Then they'll do it) is that a joke? Yes lol ah oki lol i think they want to add edit/del messages to IRC which would bring it into the modern age needs gifs and emoji reacts I can't take it anymore plz no hahaha IRCv3 has reply to message tho :heart: :fire: they are talking about base64 embedded images ⊂(◉‿◉)つ would be nice for latex rendering lets not deviate... kek end the meeting and continue the degen discussion after !next Elapsed time: 4.8 min No further topics !end Elapsed time: 0.0 min Meeting ended devs gotta eat It's 1st of April so I approve everything oh yeah happy april :D lol oh that's great cos i want to add a russian roulette function to the runtime Good meeting \o/ ++ o/ hanje-zoe: whackd.exe o/ haha anyway cu later, glhf everyone cya cya all bye enjoy your meal upgrayedd Testnet in two weeks(TM) here_we_go_again.jpg > hanje-zoe: loopr: delete "impl WIF for Foo" completely draoi: What are the seeds to use for the current darkirc network? but then I can not write any test no? loopr: put it in the mod test darkfi/src/zk/gadget/smt.rs:243 ah ok want to say smth about the generator constants stuff? at least one of these seeds should work: seeds = ["tcp+tls://anon-fore.st:5262", "tcp+tls://xeno.tools:5262", "tcp+tls://dasman.xyz:5262"] brawndo: shouldn't this code be an assert (inside WASM)? darkfi/src/sdk/src/util.rs:71 brawndo: if you recv hosts which look sketchy (specifically many many peers with the same ip but diff ports) lmk which connection you recv them from if poss loopr: look in src/sdk/src/crypto/constants/fixed_bases/, then you see in zkas proof files (*.zk files project wide) we have a constants section at the top of the file. These load constants inside the ZKVM (see darkfi/src/zk/vm.rs:654), but zkas needs to be modified to allow specifying arbitrary constants, which are generated using the halo2 function find_zs_and_us() (as well as supporting these hardcoded constants) biab upgrayedd: how do get_last_block_height() and get_verifying_block_height() differ? aren't they always the same thing?
: @zero pushed 1 commit to master: 69cf9c3a1a: runtime: add missing get_tx_hash() hanje-zoe: Avoid asserting when you can hanje-zoe: when you are retrospectively validating blocks no, last_block_height can be greater than verifying_block_height draoi: Thanks, I'll try it soon also verifying block height in normal operation would be last_block_height + 1, but again depends on execution pattern their usage right now is to validate runtime correctness aka the node is correctly using the proper last+1 height as its verifying height in PoWReward if they try to validate a PoWReward call thats not following this pattern it will fail aha thanks brawndo: it's an internal bug though if this is triggered so it should be fine to assert here (since it's a logic error if so) : @zero pushed 1 commit to master: 9074440105: runtime: remove call_idx from the payload, and add it as a Runtime param upgrayedd: ^ you might want to check i haven't made any errors there (it's a small commit) : @zero pushed 1 commit to master: 98fb3af981: runtime::{smt, merkle}: add missing call_idx to db_roots data : @zero pushed 1 commit to master: 732b9ae38a: drk: 'fix' make clippy error (just a temp patch) : @zero pushed 1 commit to master: 83f5898de5: DAO::vote(): now with SMT flavor DAO is done! \o/ noice : @skoupidi pushed 1 commit to master: dad7577bed: blockchain/tx_store: new tx location tree added hanje-zoe: ^^ Sent a PR https://github.com/darkrenaissance/darkfi/pull/266 Title: Minor documentation improve - Clarify anonymous assets page. by AgustinBadi · Pull Request #266 · darkrenaissance/darkfi · GitHub just some minor fixes in the docs : @skoupidi pushed 1 commit to master: 85c80e1bd3: blockchain: store txs locations using the new tree : @skoupidi pushed 1 commit to master: e8cb2d1f51: script/research/blockchain-explorer: updated to latest darkfi structures reporting in, ready to get buttered crumpet https://ibb.co/YfF2Hg3 Title: sandwich-mayo hosted at ImgBB — ImgBB just what I've been looking for can't get this script to aggregate all channels to work hanje-zoe: > also the last lines in from_wif() are unnecessary. why are you doing this? Err(e) => Err(crate::Error::from(e)) Because the Encodable/Decodable traits don't return a darkfi::Result/Error, but std::io so afaics it's either that or we change the return types in those traits? hanje-zoe: nice spotting the potential underflow when cutting the decoded bytes, thanks should we allow empty object encoding/decoding? currently updated PR to check for a min length of 6 decoded bytes - 4 checksum, 1 prefix, and at least one payload ah ok well do Ok(foo?) instead of match ... Err(crate::Error::from(e)) : @AgustinBadi pushed 2 commits to master: 89bc896bd2: Improve anynomous assets page - Clarify terms of the explanation : @AgustinBadi pushed 2 commits to master: bb47e7ef0b: Improve anynomous assets page - Add minor explanation otherwise looks good, Ok(Self::decode(&decoded[1..decoded.len() - 4])?) : gm gm greets sire gm o/ : @draoi pushed 1 commit to master: 4d0c36a508: hosts: reject peers from other hosts that already exist on our greylist... : @zero pushed 1 commit to master: 9188a62bb3: smt: simplify ZK gadget.
Use `root = sparse_merkle_root(pos, path, leaf)` instead of the more complicated `is_member = sparse_tree_is_member(root, path, pos, leaf)` draoi: I can see my darkirc making duplicate connections (in the logs) Connecting outbound slot #3 [tcp+tls://31.10.162.198:63304] Connecting outbound slot #2 [tcp+tls://31.10.162.198:63286] Connected Inbound #0 [tcp+tls://31.10.162.198:63316] upgrayedd: i introduced a new TransactionHash type in sdk/src/tx.rs, i'll update your SDK funcs to it but np if you prefer blake3::Hash then we can switch to that instead Connected Inbound #0 [tcp+tls://31.10.162.198:63356] they are not duplicates, the ports are different these are spammy/hostile nodes not sure why they're still being broadcast, i'm receiving them from the dasman seed there might be something i'm missing... looking into it okay Maybe it would be useful to have an array of IPs we can manually blacklist in the config toml yes that's a good idea : yo : Hi p : nice :D heh I don't see your msg there damn did your event graph sync already? : echo echo back : echo back Yes on attempt 1 nice I'm not running in verbose mode though could be my event graph : hi : Hi Not seeing it, no damn i can see the event graph doing stuff when i sent a msg, like it seems to be working upgrayedd: also your db is saving (block_height:u64, tx_idx:u64), at least the tx_idx should be u32 (or even u16). you halve the size of this internal DB while leaving the public API to use u64 : @zero pushed 1 commit to master: b24cde844c: dao: add usage of get_tx_location() : @draoi pushed 1 commit to master: 84dcc54433: net: add blacklist field to settings and avoid for duration of program. : echo echo back : echo back : Now I'm seeing the history draoi : ok nice : ah yes, and this too :) : yay : I restarted with -vv lol : That slows down the daemon considerably though : @zero pushed 1 commit to master: b6e8c00243: replace all data strings output as [123, 78, ...] with big endian hex strings. : did you rebuild w the blacklist feature? i blocked dasman seed and it cleans up my node activity significantly (no longer receiving spamming peers) : btw what do you guys think of having blacklist avoid connections to a Url host_str rather than host_str + port? i.e. we block all connections regardless of the port : Will do later, I want to observe the node's activity a bit : ++ : Why not support both? :) PKS/VKS in test-harness- will update these later once finished my patches : yeah both is good ugh nix is broken on gentoo again i didn't know you're a nix fan. I'm not, but it's the only thing we can use to have reproducible wasm builds ah : @zero pushed 1 commit to master: afa66e2bb0: MerkleNode: display as hex string instead of base58 : @draoi pushed 1 commit to master: 318c7bef49: inbound: remove duplicate call to unregister()... ^ whoops, that was a bug hanje: well blake::Hash and TransactionHash are both [u8;32] so it doesn't matter really what is used, feel free to update everything for conformity ok cool ++ u64 was used for prototyping lol was going to reduce to u8 tbh when I updated height to u32 u16 might be the sweetspot nice why were 89bc896bd26f7f9467e433ff9900ecbc2b100682 and bb47e7ef0bbb0827639991a985e2bb4d24097c6d pushed to the repo?
they are unformatted ah well i merged the changes by ash, nbd yeah its nbd, but don't blindly merge stuff ok i did skim it though and seemed good well the content is good, still since we use specific code/doc style, it should first follow it and then get accepted ++ understood thanks professionals_have_standards.jpg hanje-san did you check if the runtime fns work? I didn't test them i am debugging get_tx_location() now, i can't get it to work snippet? darkfi/src/contract/dao/src/entrypoint/propose.rs:155 if you uncomment this, then run `make test` inside src/contract/dao/ then you'll see the error. I'm now dumping the DB to see the contents (maybe the endian is reversed?) bruh check the rustdoc you have to deserialize the tuple src/sdk/src/util.rs::166 oh wait you changed it? yes ofc the issue isn't deserializing, it's the tx hash brawndo: does that decoding count towards fee? since its in the sdk code, not in the wasm function Any non-host function will accumulate gas by the wasm metering it's quite fast and everything uses gas ok so sdk calls are accounted for right? yep hanje-san btw since you changed the fn, also update the rust docs lol yessir btw make test fails with: Parser error: Unimplemented opcode `sparse_merkle_root`. (line 48, column 22) had to first run make test on root repo I think it runs now ACTION promises to write comments & rustdoc after yeah zkas changed so it needs to be recompiled (make zkas) yy I figured huh weird, i dumped the location tree, and it only shows 1 entry with a value of 00000000000000000000000000000000 what did you use to dump? you can use script/research/blockchain-explorer but I guess you are dumping the in-memory db let tree = &wallet.validator.blockchain.transactions.location; for kv in tree.iter() { ... } mine fails at building dao mint tx rm test-harness/*.bin rm src/contract/test-harness/*.bin yy the vks/pks yeah the single entry is correct as db has only the genesis block which contains a single dummy tx at (0,0) the test-harness calls wallet.add_transaction() in lib.rs a bunch of times, which calls self.validator.add_transactions([tx]) remember test-harness doesn't generate blocks it simply adds the tx mutating the contract states ah ic tricky ah maybe i can just mutate the DB as a hack LOL yeah just add it directly to the DB haha oki append the location entry you have access to tx_store through the overlay yeah you even made .insert_location, rz ez ;) remember to also add the tx since iirc add_transaction only applies them, not stores them what happens if i don't ah we are already adding the tx are you sure? where? self.validator.add_transactions(&[tx], block_height, true, verify_fees).await?; upgrayedd | since iirc add_transaction only applies them, not stores them oh right you mean the contract store yeah that just applies the state transitions to the db headers, blocks and txs are only stored through blocks its an aux function nice good info, you saved me a lot of wasted time lol Check the money integration test to see how to make new blocks : @zero pushed 2 commits to master: 88c39e5861: dao::propose(): fix get_tx_location(), by making the test-harness write the txs and their locs to the DB inside wallet.add_transactions() : @zero pushed 2 commits to master: ed01a1a76a: test-harness: update vks/pks hashes, put back to info and fix docstring in sdk i think it's good actually to have specialized BlockHash and TransactionHash types looking over validator/consensus code, it's hard to see which blake3::Hash refers to what.
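to illustrate the point, a minimal sketch of the newtype pattern (the real TransactionHash lives in sdk/src/tx.rs; the derive list here is assumed):

#[derive(Clone, Copy, PartialEq, Eq)]
pub struct TransactionHash(pub [u8; 32]);

// e.g. a lookup keyed on the inner bytes (hash.0)
fn get_tx(hash: TransactionHash) { /* ... */ }

fn demo() {
    let raw = blake3::hash(b"some tx");
    // get_tx(raw); // compile error: expected TransactionHash, found blake3::Hash
    get_tx(TransactionHash(*raw.as_bytes())); // explicit conversion required
}

the wrapper costs nothing at runtime but the compiler stops a block hash being passed where a tx hash is expected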
they all kinda blend in together hanje-san: wrappers are always useful codewise hanje-san: When staking for IRC accounts, should the staked amount be public? I'm not sure what to do in the case we keep it private, we could only do some threshold brawndo: i'd make it public and just have fixed denominations we can always make it more anon later Sure thing : @draoi pushed 2 commits to master: 24ec6fffd7: doc: add cautionary comments about unregister() : @draoi pushed 2 commits to master: c47630366c: refinery: acquire exclusive lock on greylist before refining... ^ fixed another bug this one was causing a panic... recommend updating nodes... sorry bros np brawndo: here? Hey yep hanje-san: updated with suggested fix ty how does Ok(Self::decode(...)?) get rid of an otherwise incompatible return error? did you try it? ran tests and they succeed oh it literally applies return Err(From::from(err)) for the Err branch hanje-san: I see that BlockInfo::full_hash() is not used, so maybe just yeet it? so it doesn't confuse anyone that might try to use it yeah that's a good idea, i'll delete it here gg loopr: #[derive(Debug, Clone, thiserror::Error)] <- can you remove thiserror oh wait nvm if it's ok, i'll directly apply the file since your commit log is messed up or you can cherry pick your commits onto a new branch, squash them and then reopen the pull req in the future, it's not good to make a branch, then merge master onto that branch unless it's a long running branch (this is for a feature) esp since you merged rather than rebasing hey I have some problems with taud. In daemon: "cannot discover addresses". when I try to add a task I get "you dont have write access". I rm config, pulled, recompiled and taud --refresh but it doesn't work. could anyone propose what I can do now? dasman: ^ reka: what seed are you using? same as before ["tcp+tls://dasman.xyz:23331"] can you refresh and comment the seed and set peers = ["tcp+tls://dasman.xyz:23332"] ++ : dasman started task (yWx0dR): event graph tool hanje-san: I have already been corrected to not do the master-to-branch merge I did it once now I got it and have been rebasing since feel free to complete if you want, I also already squashed once I thought. or let me know your preferred steps : echo echo back : echo back hanje-san: ping https://ethresear.ch/t/optimizing-sparse-merkle-trees/3751 Title: Optimizing sparse Merkle trees - Data Structure - Ethereum Research hanje-san: Can you please review if this is fine for removing leaves from the SMT? https://termbin.com/r7f8 ok eating, will print and look tonight It's trivial, I don't think you should have to print it It's more about the API The usecase is for RLN, because for slashing, we remove the identity from the tree So further inclusion proofs become invalid just at night i try to avoid screens since i can't sleep otherwise do we do slashing? hanje-zoe: "but zkas needs to be modified to allow specifying arbitrary constants" do you already have an idea/requirement for how that interface should look? or shall I propose something iiuc each of the constant providers has to provide the same set of constants, which also can be empty so maybe a trait `ConstantProvider` with a `fn get_consts() -> ConstList` with `ConstList` being a struct with the required constants? why can I not find any references to the `synthesize` call in zk.rs? Hi upgrayedd: Sorry, saw that there was a problem regarding formatting in the docs, how is that done? to learn and avoid it in the future loopr: do they? can you give examples?
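fwiw re the Ok(Self::decode(...)?) question earlier, a standalone toy showing what `?` expands to (toy error type, not our actual darkfi::Error):

use std::io;

#[derive(Debug)]
enum MyError {
    Io(io::ErrorKind),
}

impl From<io::Error> for MyError {
    fn from(e: io::Error) -> Self {
        MyError::Io(e.kind())
    }
}

fn read_byte(mut r: impl io::Read) -> Result<u8, MyError> {
    let mut buf = [0u8; 1];
    // `?` desugars to: match expr { Ok(v) => v, Err(e) => return Err(From::from(e)) }
    r.read_exact(&mut buf)?;
    Ok(buf[0])
}

so as long as the outer error type has a From impl for std::io::Error, the conversion happens implicitly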
the way we operate is: write a doc, pass it around for review and feedback from others i suggest you write a proposal and we iterate on that Understood ash, i was referring to loopr, not you ash: just what upgr was saying is about the line limit, normally we use 80 chars. it's not strict though (we break it in a bunch of places) but good to adhere to Good, no more than 80 chars. is that the only format condition? besides a good looking .md file hehehe of course Got it, will do ash: yep it's the standard, see how we do all our text which editor do you use? have you ever tried (n)vim? I heard nvim and I am here to promote it /s our lord and savior are you an emacs user? Me? Would be a shame if I was I even went against the lisp developers and used neovim to write code in lisp. Damn I annoyed them with that. Not that we were serious about it. brawndo: can we add itertools? https://lib.rs/crates/itertools it's used by a bunch of crates like solana-runtime, plonky2, cairo, starky, lightning css, an ethereum serialization lib, .etc and lots of well known projs Title: Itertools — Rust library // Lib.rs i don't have a strong usecase for a specific thing, but i think in general it can lead to tidier code, esp things like izip macro could improve things in a bunch of places, also .try_collect() is very useful let iter: Vec<_> = decode_hex("0a00").collect::<Result<_, _>>()?; so with try_collect that becomes let iter: Vec<_> = decode_hex("0a00").try_collect()?; but if you prefer not to, it's also good with me too airpods69: one other dev here is good and he uses gedit lol just write ze fucking kods wow lol I never used gedit, either vi or straight to neovim. Back then vscode or notepad depending on which OS I was on neovim/neovide (pretty cool neovim gui editor. I use it when I am writing something) gm o/ hanje-san: I used to have itertools in zkas then removed it Up to you tbh ah ok I just implemented what I needed yeah i have no idea tbh, i trust your instinct https://github.com/darkrenaissance/darkfi/blob/564089646d137ac91307b407dbd8d5d4b254ac1e/src/zkas/parser.rs#L1163-L1188 Title: darkfi/src/zkas/parser.rs at 564089646d137ac91307b407dbd8d5d4b254ac1e · darkrenaissance/darkfi · GitHub i like copying code snippets from random crates Yeah that's fine too under a proper license it's almost like they could just publish code samples/patches instead of crates You can add it to the function's rustdoc (The copyright) ah good to know e.g. `Copyright (C) 1969 John Smith (Apache-2.0)` ok ++ https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/zkas/decoder.rs#L41-L44 Title: darkfi/src/zkas/decoder.rs at master - darkrenaissance/darkfi - Codeberg.org Or this lol ^^ Did you manage to look at the SMT patch? btw we have a bunch of WASM fns mixed with generic SDK stuff. should we consider putting all the WASM fns in a subdir called wasm/? i printed the text but my printer added margins that cut the text, will review it later today Such as? ok for example util.rs or merkle.rs, there is also crypto/merkle.rs but they are separate things these wasm fns aren't usable outside wasm, so maybe they should be segregated in a submodule Perhaps maybe we should have a 1-1 correspondence in src/sdk/src/wasm/ with src/runtime/import/ I tried to make one of them generic, but yeah it doesn't apply to everything (See sdk/log.rs) aha that's a good idea. why not override println!()?
i think the logger crate disables println anyway nbd, msg!() is also clearer in wasm I don't like overriding functions hanje-san: i'm just making small polishes to net now, you can start reviewing if you want. just pushed some documentation but commit bot is down rn That's usually more trouble draoi: great, lmk once done and i'm ready just trolling brawndo rn lol brawndo: ok i agree, but lmk about SDK wasm stuff. if you think it's a good idea, i will just add a redirection for current functions and add a #[deprecated] tag then cleanup after for example db_get/db_set, i don't think they can be generic Well Better to just do it fully than half the work also now i want to copy Itertools .try_collect() into util.rs but it's squatted by wasm fns ok i can do it np in a single commit, might take me 40 mins lol make clippy is your friend the TransactionHash was the worst but this should be ez what name should it be? wasm_api? darkfi_sdk::wasm::db::db_get ok 2 new nvim bindings this morning: " replace word under cursor with current yank. Good for replacing fn names nnoremap q "_dwP " delete function body nnoremap D Vk$%k"_d brawndo: instead of adding flags to block hosts with the same ports we're connected to, should we just add generic fail2ban support and let it handle policy like that? i see fail2ban is configurable to allow custom python scripts. so a python script could track which hosts we're connected to by scanning the logfile, right? would that work well? It wouldn't work well My idea was to just have a fixed array in the toml where you can put hosts you don't want to connect to ah ok draoi: you should make the filter policy a configurable trait and then make a settings section called [net.filter] that way we can later upgrade it as needed so rn we just have `blacklist` in settings which is a simple solution and works well yeah put blacklist under filter the idea was to add something like `blacklist_all_ports` which would allow us to blacklist all the ports of a given peer, if configured yeah like a HashSet you can use "*" to indicate, apply setting to all hosts make a trait called FilterPolicy with its own settings, and segregate all this stuff there that way later we can do .set_filter_policy(FancyPolicy) or DefaultPolicy .etc ok that makes sense Just do: "tcp+tls://foo.bar" if you want to block all ports And "tcp+tls://foo.bar:123" for a specific port ++ that's even simpler that can be represented with an enum You don't need an enum, you just need a hashmap of <"scheme://host", [port]> true if map["scheme://host"].is_empty() { block_all } else { block_ports } i thought we were just talking about rust Url which has the method port() that returns an Option that looks good too if port().is_some() .. yeah better than what i said You should have a blacklist hashmap brawndo: darkfi/src/sdk/src/crypto/util.rs:110 And if you are blocking all ports, then the map would be (host, vec![]) how can i modify this to be cleaner? Otherwise it would be (host, vec![1,2,3]) is there a way i can do .into() with a turbofish?
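btw a rough sketch of that blacklist check (untested, names made up, not the actual net code):

use std::collections::HashMap;

// "scheme://host" -> blocked ports; an empty vec means block every port
type Blacklist = HashMap<String, Vec<u16>>;

fn is_blocked(bl: &Blacklist, host: &str, port: u16) -> bool {
    match bl.get(host) {
        Some(ports) if ports.is_empty() => true, // host listed with no ports: block all
        Some(ports) => ports.contains(&port),    // host listed with ports: block those only
        None => false,                           // host not listed: allow
    }
}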
match Self::from_repr(bytes).into() { Some(v) => Ok(v), None => Err(ContractError::HexFmtError) } Self::from_repr(bytes).into::<Option<Self>>().ok_or(|_| ContractError::HexFmtErr) Look at that haskell dev the thing is i cannot use .into() with generic args so no idea how to coerce CtOption to Option What I gave you should work https://termbin.com/4g25 This passes clippy ok sure lol Option::from(Self::from_repr(bytes)).ok_or(ContractError::HexFmtErr) i added license attribution and project link for itertools: darkfi/src/sdk/src/util.rs:21 they don't have any copyright notice or year listed anywhere https://github.com/rust-itertools/itertools Title: GitHub - rust-itertools/itertools: Extra iterator adaptors, iterator methods, free functions, and macros. Choose the cuck MIT license haha i did I pushed smt del My test case is asserting that the inclusion proof fails after removing the leaf smt.remove_leaves(vec![(pos, leaf)]).unwrap(); assert!(!path.verify(&smt.root(), &leaf, &pos)); yeah the path changed btw you should move for _ ... {} into a fn called .recompute_tree(dirty_idxs) internal fn ok will do 0d3b3cf77c1b8d1a4e9debf9f1a534fe48df7c1b Why this though? You could've just imported the methods directly, not the module i already had the commit queued but got a rebase conflict in general throughout the contracts, we're using wasm::foo() rather than foo() Doesn't make sense i tried to push commits but there was a conflict I'm saying doesn't make sense to use wasm::foo instead of foo Everything is wasm there :D ah we can change it if you want, i thought it looked nicer lol everything in src/contract/ calling the host wasm fns is wasm::foo() maybe it should be host_api or wasm_host_api? It should just be foo() You import the functions you need on top of the module and you know where they come from We took this same reasoning against importing foo::* So let's stick to it Also IMO it looks ugly lol i don't feel strongly but to argue my point: i think the host functions are special, and indicating that with a prefix makes the code clearer at first glance Why are they special to a contract dev? A contract dev just wants to use the provided API and shouldn't care where the function is Nor care about the internals ok I think this makes it more confusing, since it's introducing another namespace And called "wasm{_api,_host}" on top of it all brings more weirdness sure i guess the contract dev doesn't care On another note I'm a bit scared of the asserts in runtime/ This crashes the node Why not handle some things gracefully instead of using expect/unwrap the node should crash It won't crash fully, it will panic a thread And leave the node in limbo oh that's bad, it should crash the node. How can we propagate panics? You can't i don't think changing asserts to errors is a good idea Depends where i mean the node not crashing certainly is an issue but we use assert outside the runtime, so really it's no different inside the runtime or outside the runtime really since an assert is for a logic error which should be caught early Yeah though you won't see it You can build with `[profile.release] panic = "abort"` but you will not get the unwound stack or the location I think https://github.com/renderlet/wander Title: GitHub - renderlet/wander: wander - the Wasm Renderer can we add that to debug? release should just not compile asserts maybe we should make our own assert macro so we can disable it in production builds what do you think?
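something like this is what i mean (sketch; the `dev-asserts` feature name is made up):

// cfg! evaluates at compile time: with the feature off the branch is
// constant-false and gets optimized out, even in release builds
macro_rules! dev_assert {
    ($cond:expr) => {
        if cfg!(feature = "dev-asserts") && !$cond {
            eprintln!("dev_assert failed: {}", stringify!($cond));
            std::process::abort();
        }
    };
}

then dev builds abort hard on logic errors while production nodes skip the check entirely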
that seems to be the proper solution I think we should gracefully handle possible errors there's no point checking for errors that never occur otherwise you will go crazy trying to always gracefully handle all internal errors if it's a big deal, we can just delete all asserts inside runtime, but then why not just make our own macro which calls abort in development, but is compiled out in production? We always compile release Otherwise the code is too slow I suppose you could have a macro But that doesn't solve production You don't want to keep the node in limbo either Correctly handling errors keeps the node working properly i didn't say release, i said production so we control the flag ourselves the asserts get compiled out, but in dev version (debug or release), it will abort Yeah but I'm saying, what happens when you have a panic but you don't abort? nothing, it's ignored This breaks the node (That's also what we're currently doing btw) there's some incorrect asserts in merkle.rs, but lets look at 2 correct ones assert_eq!(latest_root_data.len(), 32); assert_eq!(value_data.len(), 32 + 2); if this fails to be true and the function exits, the DB will be in an inconsistent state and the contract using this call will be broken Yes so on those specific ones you want to abort So how about this: - Errors which can be handled, get handled - Unhandled panics should abort the program We can also set our own panic hook, but IMO it should always abort The issue is (not) being able to get the unwind We could try this: https://doc.rust-lang.org/std/panic/fn.set_hook.html Title: set_hook in std::panic - Rust However it's also important to know the difference and consequences of what happens when asserting inside wasm as well Right now it's caught by wasmer, but I don't know what happens if we change this Aborting can also leave things in an inconsistent state abort means crash and burn rather than keep trying to run after a fatal error ACTION doing a git-bisect for deployooor what's the command to reload config while the daemon is running again? What daemon? ah, any darkfi daemon? 
i thought there was some generic SIGHUP or something command Only if implemented pkill -SIGHUP or sth isn't darkirc now doing /quote rehash or so If you want to do it in darkirc, you should do it through the IRC protocol https://codeberg.org/darkrenaissance/darkfi/src/branch/master/bin/darkirc/src/irc/client.rs#L342 Title: darkfi/bin/darkirc/src/irc/client.rs at master - darkrenaissance/darkfi - Codeberg.org 13:19 isn't darkirc now doing /quote rehash or so Yep in general is it possible to do tho? right, and we should always consider that the configs can be re-loaded, so initializing on start() with values from the config is a bad idea it's nice if the config can be modified while the app is running without requiring a restart think about if you have a GUI and change settings, then it needs to restart to reload, or at least stop the p2p network and restart it Yeah generally programs use SIGHUP as an interrupt and catch it Then do whatever needed in that handler it's not a big deal though if p2p needs to restart to reload a config so don't lose sleep over it if you can do it, then it's a small win, not a huge victory kk tnx https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/util/cli.rs#L259-L292 Title: darkfi/src/util/cli.rs at master - darkrenaissance/darkfi - Codeberg.org This will re-read the config and send it over to any subscriber ah cool https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/util/cli.rs#L230-L246 Title: darkfi/src/util/cli.rs at master - darkrenaissance/darkfi - Codeberg.org You'll get the SignalHandler struct And then you can access the sighup_sub ah perfect wtf emacs works in the browser https://agdapad.quasicoherent.io/#foo Title: Agdapad hanje-san, open a browser in emacs in a browser lol when i first discovered VMs it blew my mind that was possible for me it was when a youtuber (SomeOrdinaryGamers) made a VM in a VM and I audibly said "Wait a second, you can nest this?!" XD https://parazyd.org/pub/dev/random/TOS-Terry_Responds_to_the_Haters.mkv Terry used VMWare RIP King > Dianna Physics Girl if only he made it today he would be well off imagine HolyC + borrow checker https://upload.wikimedia.org/wikipedia/commons/4/4a/Terry_Davis_1990.jpg Title: Wikimedia Error : gm : gm : :D : It seems to work : At least for our echochamber :D : event graph has worked better for me since i blacklisted peers returning the shifty addrs : hmm ... had to disable a lot of my previous seed addrs returning refused/unreach : only now DAG synced : yay, still running after darkirc stop/start hanje-san: I use vscodium I use Gentoo hanje-san: Hey, where's the contract code where you initialize a sparse Merkle tree in wasm? I mean when a contract is deployed darkfi/src/contract/money/src/entrypoint.rs:159 sorry darkfi/src/contract/money/src/entrypoint.rs:146 if the tree is empty we don't need to write anything at all because EMPTY_NODES_FP is precalculated ah that's the one Thanks Can we put those precalcs in the SDK? I have to use it in the RLN contract as well Hey a bit new to this and trying to learn. Followed the instructions for local deployment in the run a node section of the book (though I am using the latest tag rather than master). When I ran the tmux sessions it did something, but it doesn't really seem like anything is happening. How might I be able to check whether it is working as intended, or whether I made a mistake?
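(re the SIGHUP thing) for reference, the bare-bones blocking version of that pattern with the signal-hook crate, illustration only since our actual impl is the SignalHandler linked above:

use signal_hook::{consts::SIGHUP, iterator::Signals};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut signals = Signals::new([SIGHUP])?;
    for _ in signals.forever() {
        // re-read the config file here and notify any subscribers
        println!("got SIGHUP, reloading config");
    }
    Ok(())
}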
brawndo: it's in the SDK too i mean it's in the SDK oooh ok sry I wasn't looking well btw money does extra stuff like saving a record of all the roots so when we get a snapshot of the coins root, we also want to check it corresponds to the same nullifier root Sure In RLN this is not needed so we compare (tx_hash, call_idx) for provided coins root and nullifiers root, and then we lookup tx location (block height, idx) using tx_hash, and we check the block_height is not too old yeah cool, so all that other stuff you see there is related to that mechanism crumpet: that part of the doc is outdated, what you need to do is compile darkfid, minerd and drk in repo root and then run the tmux script In there it's just needed to maintain identity commitments put/del yeah so i guess the root must be NOW so go like (in repo root): make darkfid minerd drk -> cd contrib/localnet/darkfid-single-node/ -> ./clean.sh -> ./tmux_sessions.sh thanks upgrayedd (I thought you were supposed to be frozen), let me give those a go and see where I get do I have to do that on main, or is the latest tag okay? just to be clear, localnet script is for dev, so don't expect to connect to anyone master I'm primarily trying to learn and help contribute if I can, so it's more experimental hanje-san: Yeah, the hardest part will be keeping it in sync with irc daemons just keep in mind latest code is PoW, so when you hear fans going brrrrr don't get confused But I expect a public node or smth the pc is thinking should we remain silent while it's doing that? brrrr hanje-san: preferably yeah, you don't want to interrupt a witch during spell casting hanje-san: You initialize the db with roots_value_data, what's that supposed to be? Is it just some leaf? I need it initialized empty do you need all the roots? No then don't worry Just the latest one always you mean line 126 that's supposed to be (tx_hash, call_idx). i'm patching it now 148 I just need to store the tree itself The Merkle tree yeah line 148 not needed oh ok just 159 (i was confused before, but this is correct now) Gotcha Thank you np ofc! where is runtime put_object_bytes() used? seems like an unused fn same for get_object_bytes Dunno, that's one of yours :) oh, ok i'm restricting their ACL to [], and we possibly delete them I think you can just nuke them kim, press the button what about the objects store? can i delete if unused? oh wait nvm we use that ok : @zero pushed 1 commit to master: c781936b12: replace usage of blake3::Hash for tx hashes with TransactionHash type. Change all occurances of `let txs: Vec<blake3::Hash> = block.txs.iter().map(|x| blake3::hash(&serialize(x))).collect();` to `let txs: Vec<TransactionHash> = block.txs.iter().map(|tx| tx.hash()).collect();` : @zero pushed 1 commit to master: 564089646d: sdk: add hex decoding fns : @draoi pushed 2 commits to master: 7ff45ea2a5: refinery: acquire exclusive lock on greylist before refining...
: @draoi pushed 2 commits to master: 83549ccbf0: doc: add missing documentation to refine session and hosts : @zero pushed 1 commit to master: b9cc42cdf4: SDK: move all WASM runtime fns into wasm/ submod : @zero pushed 1 commit to master: d3404939aa: sdk/crypto/util.rs: replace ugly CtOption mess with cleaner version using .into() and match : @zero pushed 1 commit to master: 1cdbd03673: sdk/util.rs: add Itertools trait with .try_collect() method : @zero pushed 1 commit to master: dcf419b0ca: sdk: move find_subslice() and NextTupleN from zkas into SDK util.rs : @parazyd pushed 1 commit to master: f425397115: sdk/crypto/smt: Implement leaf removal support : @parazyd pushed 1 commit to master: d68619d84d: runtime/import/smt: Correct log message : @zero pushed 1 commit to master: 0d3b3cf77c: deployooor: fix broken WASM fns : @parazyd pushed 1 commit to master: 2ef2f6560b: sdk/crypto/smt: Move tree recalculation into separate internal function : @zero pushed 1 commit to master: 3a6707cb81: runtime/merkle: remove faulty asserts, we used to exit early with success if coins was empty but then that meant the merkle root -> blockheight wouldn't be updated. recently this was changed so the old asserts are no longer valid. : @zero pushed 1 commit to master: ac979a2e38: doc/wallet: expand scenegraph section : @zero pushed 1 commit to master: e0932c5c50: deployooor: fix instruction deserialization : @zero pushed 4 commits to master: cb4ef9b1fe: money: fix FIXME, add (tx_hash, call_idx) now the calls exist : @zero pushed 4 commits to master: b42cb611c5: runtime: review and update ACL perms for fns : @zero pushed 4 commits to master: 56d5281f6c: runtime: remove unused put_object_bytes() : @zero pushed 4 commits to master: dd341c156a: smt: add docstrings for runtime/SDK fns weird hanje-san: here? yep u64->i64 will overflow I saw you use u32 for heights want me to update the header structure etc so its u32? as you wish, certainly we should be consistent cos there's a lot of u32/u64 mess in the contracts there. any is fine, just we should pick one well if you need the i64, we can't use u64 since its an overflow so we have to go u32 yeah it would be more convenient for wasm, otherwise i have to use bytes and serialization stuff ok can you add the preferred values into doc/src/arc/consensus.md where we have the structs defined? so I can use that as spec ok got it and for tx location yep and call index yy all these just chuck them all in the docs to have as specs ++ btw are you going to do the HeaderHash we discussed or you want me to? could you do it? i have to review net and prepare a conf talk yy no worries I can do it ty just out of curiosity do you need it in the sdk like the TransactionHash? and does it need to have same fns, or a wrapper should be fine? since its mainly for better code readability no i don't. we could always move it there if needed inner is still the same [u8;32] kk then just a wrapper for readability yeah also safety so we don't mix the hashes well, all hashes are [u8;32] yeah but they cannot be mixed so its just code "safety" since in reality everything is the same wdym?
they can be mixed no you cannot use TransactionHash, it's not an alias if you have an fn with def fn foo(&[u8;32]) you can pass any blake3::Hash there yes that's the issue well I'm not talking code per se more like low level stuff like passing the key bytes to sled to retrieve it yeah i mean the API, i think it's easy for wires to cross but yeah I gotcha gotcha gotcha ok cool kk will try to cleanup src/blockchain so its fully consistent on what's what and go upwards from there just add the final(TM) size defs for height, nonce, etc etc yeah and deployooor too, i can do that if need be hanje-san: does this look good? https://imgur.com/EvowAjM Title: Imgur: The magic of the Internet hanje-san: looked deeper at the code and now I have more questions :) grep -r find_zs_and_us only yields binary matches is this something that is triggered manually or something like that? asking specifically regarding the dynamics of how the new or other constants will be generated will they replace the hardcoded ones? you said the existing hardcoded ones should also be supported, but does that mean that they will be used if not overwritten, or the "correct" constants will be selected dynamically, e.g. through a match or something? hope my questions are clear find_zs_and_us is a zcash function, in the halo2 lib or somewhere (maybe orchard) we keep the current ones, but allow through zkas to specify our own generators which is done dynamically darkfi/src/zk/vm.rs:654 each of these values, OrchardFixedBasesFull, ConstBaseFieldElement, .etc they should be generalized : gm gm : gm hey all, I've been caught up with work lately but coming back to this now. Last month haumea suggested a change I could look into for zkas/parser.rs, in the match on token str, VarType could have a method to be created from a str is this something I can still attempt? I see there's a comment in the zkas/parser.rs code, line 688 TODO: change to TryFrom impl for VarType, is that related? deki: what do you see on those lines? how do you think it can be improved hanje-san: In WASM, there's the sparse_merkle_insert_batch function hanje-san: Could you quickly tell me why it takes 3 dbs? db_info - latest root, db_smt - the whole tree, db_roots - the record of the roots maybe we could enable setting MAX to have them ignored Great They could be options if you want to ignore them But it's fine for now ok Will it be a problem if a root repeats itself? no, it just gets overwritten (updated) in db_roots e.g. Add leaf, remove leaf <- this reverts a root ok : brb : b hanje-san: Hey do you have a bit of time for some contract help? : @draoi pushed 3 commits to master: 498a88e3b9: p2p: reorder shutdown sequence to reduce lag on CTRL-C : @draoi pushed 3 commits to master: 8dad8dfc8f: lilith: add missing safety check : @draoi pushed 3 commits to master: 6b29e8c659: net: make blacklist settings more configurable + other fixes... the 5-15s lag on CTRL-C seems to be darkirc related rather than net since i can't reproduce it on other binaries a few other weird darkirc behaviors: repeatedly prints info about channels etc ... i think they are the main things brawndo: hey will go for a run, back in an hour draoi: Well the shutdown can't be instant, the dbs have to flush and all hanje-san: on those lines you're using pattern matching on a method call to get the string value from the expression. Then pushes the Witness struct into ret? As for improving, not sure, include error checking?
That's what TryFrom seems to be about if you want a new method for VarType to be created from a str, then you'd need a new impl block for that? But it's already inside one. Will think about it some more brawndo: back Hey Couple questions deki: wrong answer, here's a clue: code duplication For staking RLN coins, my idea is to just do it alongside a Money::Transfer the code is correct, it can be just improved The output coin would contain the spend hook of the RLN contract brawndo: aha, nice that's how staking is done Then it would not be spendable until unstaked the RLN contract enforces that Q: In the RLN contract, how can I make this a requirement: The tx contains a Money::Transfer with a single output lets call it RLN::unstake() It's RLN::Stake() (since the spend_hook is a func_id, not a contract now) Right, yeah ah you mean, when staking, how do you ensure there is only a single output I mean both now that you reminded me of this :D Yeah so: unstaking is easy, i'm not sure we have any mechanism for staking constraints 1. When staking, we create an output that has a spend_hook to RLN::Unstake() 2. When unstaking, we remove that spend_hook correct, so you mean RLN::unstake() checks that right? (or set it to whatever the user wants) Yeah So I've been writing stuff, but again got a bit confused with the tx call tree, children/parents and whatnot why must it have a single output? (just so i have the full picture) I mean a single output that will become the staked coin There can be other irrelevant ones, sure i don't see the need for this when you make an RLN, you use a single coin, and you prove the coin's spend_hook is RLN::unstake() Anyway I was thinking, probably the Money::Transfer() call can be a parent of RLN::Stake() you don't need to explicitly check anything when staking, just when you use the coin, you check that it is formed correctly (otherwise rejected) But I'm unsure how to access this within the state transition Don't bother with that, you don't see the entire picture I'm just asking about this :) Would the parent call be like calls[call_idx - 1] ? ok right now we don't have any mechanism for this, but i believe it's the same thing as just explicitly checking wherever the coin is used (including in RLN::unstake()), that the coin meets those criteria. for RLN::unstake(), it's easy to do any checks on the coin or tx formed think of the RLN::stake() workaround as, like in your functions, how we do lazy initialization - you simply put that code wherever the coin gets used to check it's correctly staked otherwise reject I think he needs a code example on how to match the tx calls, like in propose and xauth Yeah ok upgrayedd: It'd be like calls[self_.parent_index], right? darkfi/src/contract/dao/src/entrypoint/auth_xfer.rs:139 ty this checks the sibling call, for parent, see darkfi/src/contract/money/src/entrypoint/transfer_v1.rs:61 you see we just calculate the func_ref and pass it into zk, where it's committed to the coin brawndo: parent should be the stake and child the transfer yeah but there's nothing enforcing when you stake a coin with money::transfer() that the parent is money::stake() upgrayedd: ah so I'd access call_idx+1 ? it's the spend_hook, not mint_hook we could add a mint_hook too if needed iirc yeah *nod* actually no, children go first so call_idx - 1 but I think you have access to the child call directly hanje-san: Second Q How can I check if an arbitrary Fp is in the SMT? in ZK or not ZK? Not in ZK Do I do Fp.as_biguint() and then smt.get() ?
darkfi/src/contract/money/src/entrypoint/transfer_v1.rs:170 .get_leaf() either it is the empty_leaf (not in the tree), or it is your value ah perfect ty ++ glad to helpu arigato hai ACTION bows excessively lol hanje-san: I see, you mean the code duplication with all the Witness structs in the match arms? Only diff is the typ field I can try refactoring it and see what you guys/gals think, assuming that's the part that can be improved. Only need to use match pattern on the typ field anyhow I gotta go to sleep hanje-san: did you add the types stuff? Anyone know of a protocol for sharing encrypted information between a Rust L1 and Ethereum? W: please write random questions at #random ohayo speaking of nvim - I have been using CoC for some time with golang with the move to rust, I was wanting to try smth else, without nodejs, assuming bloat behind it (besides, nodejs dep mgmt is awful, and probably nobody checks security implications) not sure I made much improvement tho with my latest config using nvim-lspconfig with rust_analyzer, hrsh7th/nvim-cmp and deps, simrat39/rust-tools.nvim and some other tools, feels a tad slower though what are people here using? loopr: whatever makes them more productive sure, that's indeed my goal ofc i would prefer not to have to go back to coc but feels like I should be able to get a better experience out of my current setup well, imho you should focus on eliminating the need to use such tools altogether but people are not ready for this hot take XD uuh yeah that's a tough ask how do you navigate the code then oh, maybe you're just hinting at completion > how do you navigate the code then check where it was imported from -> find corresponding file -> open it best way to quickly learn repo structure hmm that may help with the repo structure but I feel it'd lower *my* productivity, especially when having to jump around a lot and rust seems to require a lot of such jumping like, all the traits implementation and usage and gd is opening the file for me but yeah, sure, at the price of bloat and config complexity well as I already said: everyone uses what makes them more productive ++ I'm so used to working with the software I use I really don't see the point anymore in such "luxuries" a click to open will save me what? 0.5s? big deal... Until you know the codebase it's more...go to dir, open, search definition... used to do this for a while too happy it works for you Once you understand the codebase then ripgrep + telescope is really good to find code. It is a convenient way of jumping around in neovim from file to file but I think even VSCode lets you do that? never tried it : @skoupidi pushed 1 commit to master: 930a511309: blockchain: major hashes cleanup lmao yeah Even with vim, I :q , cd, vim foo : re tooling : try to "oxidize your setup" ;) hx, skim, ripgrep, cargo-binstall, bottom, ... : but vscode is nice with remote debug and all the plugins, true ... where should I share an impl doc proposal for discussion? pastebin? agorism uploads? gist? : https://dev.to/yjdoc2/completely-oxidizing-my-terminal-setup-43d8 : Title: Completely Oxidizing My Terminal Setup - DEV Community : https://www.youtube.com/watch?v=dFkGNe4oaKk : Title: Your Command Line, Oxidised - YouTube new term: oxidized setup...
unbelievable but it took me a while to grok the pun hanje-san: https://pastenym.ch/#/u9oQ9NCC&key=de8fcd9ed99aac0f5fce38bf97b11e42 gm deki: correct upgrayedd: sorry i forgot, will do it today (noted down), also have to read SMT ethresearch article airpods69: does telescope work on opened buffers (see :ls)? i don't need fzf for opening files but for simply doing :b N where N is the buffer name. I have to look through :ls to find the buffer i want loopr: you're halfway there let vcv = ValueCommitV; let vcr = OrchardFixedBasesFull::ValueCommitR; let vcr = ConstBaseFieldElement::value_commit_r(); let nfk = ConstBaseFieldElement::nullifier_k(); all of these need to be generalized lets look at darkfi/src/sdk/src/crypto/constants/fixed_bases.rs:110 pub struct ValueCommitV; darkfi/src/sdk/src/crypto/constants/fixed_bases.rs:203 impl FixedPoint for ValueCommitV { it has methods u() and z(), so those values should be data .etc for the others hanje-san: okay I can do a refactored version so the Witness struct isn't repeated, and just catch the typ field. See what you all think perfect ty, will do it this weekend, have to head out soon nice enjoy mate hanje-san, do you mean something like this? https://imgur.com/a/zzvekNL if yes then :Telescope buffers ah yeah nice i don't like these nvim popups, idk what's wrong with the command bar for this, anyway will investigate I believe telescope can exist in the command bar. I am just lazy to change it ah nice ty that's a virtue (laziness) :> linus said so I thought it was Bill Gates who said something that they hire lazy people to get work done (idk could be false internet information cause I didn't verify it or cared much to do so) ah that's why windows sucks, makes sense haha yes ;) oh that reminds me, gotta spend some time again to try and passthrough my GPU to a windows vm. Then I can be completely away from windows on bare metal. any experience with that? i think upgr dual boots for gaymes you should just play tuxracer lol xD I just want to play rdr2 on linux, figured out how to play all of my other games here using wine except for that one game and I need windows "come to linux we have amazin gaymez like tuxracer, OpenTTD, FreedroidRPG" *dual boot with templeOS (games) and linux (trolling) hah jokes on you, put me on any OS, I will troll. does anybody know where the log file is initialized in darkirc? I want to add it to the toml settings as well maybe util/cli.rs? idk tho, i've been using tee POV: you're henry II and you say "who will rid me of this meddlesome priest" and some overzealous knight kills him that's like me and this task put it on tau i heard it's more stable since the net upgrades we can start testing it with dev tasks i had some issues but will try again gm hanje-san: The log is not written to files by default for privacy reasons hanje-san: And should not be hanje-san: darkirc --log-file (or --log, I don't recall) yes i'm saying can we add the log setting to the config toml as well?
but agree off by default (as should be) It's supported in the config just not in the toml file You can add any Args param in there add it in your local config, no need to be in the repo/default one and if it gets added there, always commented upgrayedd: --log is in the CLI args but not supported by toml, nobody is saying to enable logs by default hanje-san: check bin/darkirc/src/main.rs::60 everything there is both cli and toml args ahhhh i was so confused trying to figure this out the difference is how we parse them since log is parsed as part of the daemonize macro its "pre-used" when it reaches the daemon since the macro has already setup the log config how are the [channel.] sections added to Args? using -v and --log they are part of the toml, but get parsed differently check bin/darkirc/src/settings.rs::153 aha ok so you can define things using Args or do it manually, gotcha yeah exactly nice thanks a lot the toml parsing dep we use has its quirks so some stuff works out of the box, but other stuff needs different handling Funnily enough, in Rust there isn't a single TOML parsing library that doesn't use serde Eventually I'll make bindings for this https://github.com/cktan/tomlc99 : @zero pushed 1 commit to master: 91bc56bee6: darkirc: add commented log and verbose settings + description to default toml config hanje-san: btw can you fix the broken features? sure. make test, or what? make check it takes hella time I know so to cheat start from: ok np, i'm adding your doc and reviewing net so got time cargo check --release --target x86_64-unknown-linux-gnu --no-default-features --features async-serial,zkas (on repo root) thanks make check verifies all feature combinations work so with latest moves some of them got rekt in general, the repo must always pass make test and make check, along with make clippy make clippy without warnings make check might produce some import warnings, but that's easily fixable yeah i run them periodically but forgot recently nw, make clippy is the main one that must pass btw that command doesn't seem right, i get lots of errors not related to code i touched so other devs don't get blocked about AsyncDecodable i just ran `make check` and get: well it broke on commit b9cc42cdf4499cb904e3364698b125db4e8119ae RUSTFLAGS="" cargo +nightly hack check --target=x86_64-unknown-linux-gnu \ --release --feature-powerset --workspace error: no such command: `hack` cargo install hack ah $ cargo install hack error: could not find `hack` in registry `crates-io` with version `*` sec cargo install cargo-hack ? works ty : @zero pushed 1 commit to master: 416b236715: Makefile: add comment about installing cargo-hack above check target. upgrayedd: i think it's incorrect that your Header struct has the tree inside of it it should just have the merkle root i'm going to leave alone the nonce at 64 bits, i think it's safer well not safer, just better hanje-san: it originally had just the root, don't really remember why we added the whole tree probably for easier building https://bitcoin.stackexchange.com/questions/4565/calculating-average-number-of-hashes-tried-before-hitting-a-valid-block upgrayedd: but then your SerialEncodable is saving the entire merkle tree along with the header so if i use it on the network, then it will send the merkle tree too yeah understandable which will be different for each person if they witness different things .etc brawndo: do you recall if that was something from PoW experiments?
since the change was made in preparation for PoW anyway i'll just change the doc not the impl for now you can calculate the tree from the block: file:///home/narodnik/src/darkfi/doc/book/arch/consensus.html#block oops hanje-san: yeah add the definitions and I will make the impl changes ok have you considered naming it BlockHeader? Header name might conflict with other code stuff nbd tho nah too much info You have to maintain the Merkle tree for the block _somewhere_ well you can always rebuild it and check the root matches the built one during verification Yeah yeah rebuilding is good You have to do that always why always? whenever you want the tree you just do it once you generate the block to grab the root to add to the header and then when you verify the block is valid That's always :) not too much hassle tbh yy so better to just have the root, as its the only one required I'm saying it should always be calculated yeah we just need it in those two places: when you build the block, and when you verify it yeah it's not a big computation hanje-san: also change that in the header definition and I will handle the rest yep done hanje-san: It is, you have to serialize each tx, and then add it to the tree in order yeah sure i wouldn't do it all the time, but it's not catastrophic, can be done infrequently Every block ;) pretty much ;) : @zero pushed 1 commit to master: 9ecfb0dd58: doc/consensus: update types info in tables. so about those `make check` errors, I don't think it's anything I did I get these errors: error: cannot find attribute `async_trait` in this scope but opening dark_tree.rs (one such file), I see: #[cfg(feature = "async")] use darkfi_serial::async_trait; so why would SerialEncodable macro be giving this error? either async is enabled, in which case async_trait is enabled, or it isn't and SerialEncodable is just sync cargo check --release --target x86_64-unknown-linux-gnu --no-default-features --features async-serial,zkas just verifying if it broke on b9cc42cdf4499cb904e3364698b125db4e8119ae ah well it did, can't argue with facts lmao src/sdk/src/lib.rs::55 what is #[macro_use] ? wait b9cc42cdf4499cb904e3364698b125db4e8119ae works for me that #[macro_use] just propagates the macro in entrypoint.rs upwards (otherwise it can't be used) i'm doing a bisect what did you run? dcf419b0ca839c359674ed79cb726c242c6489c8 this is the problem commit (also me) aha noice glhf fixing it :D yeah in that commit you introduced new feature combos since you added darkfi-sdk to zkas that seems to be the issue ok needs async-sdk where is this specified? i don't see anything in bin/zkas/, nor under check target in Makefile oh it's trying every combo of features, so maybe i have to do something in the code to handle this?? darkfi-sdk to zkas? yep async-sdk rather i mean darkfi-sdk is already in zkas Makefile features, but when compiling async-serial,zkas features, it gives an error cargo check --release --target x86_64-unknown-linux-gnu --no-default-features --features async-serial,zkas but async-sdk,zkas works so i'm not sure how to fix Can you revert that? why? zkas should be standalone, no deps it just deps on our SDK because i moved iterators into there Yeah but it pulls everything and takes much longer to compile : @zero pushed 1 commit to master: 1a1a26e396: Revert "sdk: move find_subslice() and NextTupleN from zkas into SDK util.rs"...
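(re rebuilding the block root above) the rebuild-and-compare is just this shape, as a toy using a blake3 pairwise fold (NOT the actual darkfi tree or hashing order):

fn merkle_root(mut layer: Vec<[u8; 32]>) -> Option<[u8; 32]> {
    while layer.len() > 1 {
        if layer.len() % 2 == 1 {
            layer.push(*layer.last().unwrap()); // duplicate last on odd layers
        }
        layer = layer
            .chunks(2)
            .map(|pair| {
                let mut buf = Vec::with_capacity(64);
                buf.extend_from_slice(&pair[0]);
                buf.extend_from_slice(&pair[1]);
                *blake3::hash(&buf).as_bytes()
            })
            .collect();
    }
    layer.pop() // the root; None for an empty block
}

build it from the serialized txs in order when making the block, rebuild in verification and compare against the header root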
afk cya : @skoupidi pushed 1 commit to master: 9d64403407: zk/debug: properly structure feature imports hi hanje-san: thanks for darkfi/src/sdk/src/crypto/constants/fixed_bases.rs:203, actually I bumped into those at some point what I fail to see is how they are being called in src/zk/vm.rs:654 also, when you say "should be generalized", you mean specifically that generics should be used? loopr: https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/sdk/src/crypto/constants/fixed_bases.rs vm.rs:647 loop There the generators are used brawndo: thanks, I saw that loop what I don't see is the link for VALUE_COMMIT_VALUE resp VALUE_COMMIT_RANDOM with those generator resp u and z fns for the other two it's value_commit_r() resp nullifier_k() brawndo: could it be they are actually missing there and they should be added? can't see any Default constructor either greets hey heyo hey : hihi : It seems my darkirc node has been stable through network outage too these days : Good work :) : very cool : I have an external addr set and it's reachable : Then in the case there's network outage, people would be unable to connect to me : But I think then when I reconnect to anyone, the address gets propagated again? : They wouldn't blacklist me? : they won't blacklist you : worst case scenario they will forget your node i.e. delete from greylist, but you will reshare again once you reconnect : Sweet : I assume the same goes on my side. Everyone becomes unreachable, and then they end up in greylist, and it keeps trying until I get network access again : And/Or alternatively it would just try the seed(s) again : yes exactly, and we try to detect when we go offline and switch off the refinery so that it doesn't delete peers when we are offline : if we haven't had any connections for > 30s, refinery is switched off : Nice : Is the 30s hardcoded? : ah no I see it : yeah so it's kind of a balance between the refinery interval and the time_with_no_connections interval : Really cool : rn the refinery default is 15s, time_with.. is 30 sec, so would delete max 2 potentially healthy peers : we could tweak for different connections perhaps : :D : Yeah maybe : I'd think mobile connections could be more flaky : ++ : Sometimes you'd get an interrupt when switching towers or so : But that's a normal roaming issue : Also likely less of a problem since you probably won't be running a public node that way haha : true : For some period of time, tmobile in NL had ipv6 over 3G : So you could host stuff : But IPv4 is usually firewalled by the operator : wow nice : @draoi pushed 1 commit to master: 2b18e5307b: session: cleanup SessionBitFlags... : sed -i -e 's/SESSION_ALL/SESSION_NET/' bin/darkirc/src/main.rs : that's a good idea : @draoi pushed 1 commit to master: f9f3fa2bf1: session: cleanup SessionBitFlags... : soz just forgot to commit that file : ^ appended note to devs: nightly is broken again so use 04-05 hanje-san: here?
hey hanje-san: looking at 9ecfb0dd5803620c149f7be8e63e7943141d4897 tx_index as u16 makes sense for call_index we can further reduce it to u8 since we cap tx calls to 20 ok sure check src/tx/mod.rs::226 aha yep ic kk gg so it will be u8 nice thx for telling me tx_index can also be brought down, since we have the 50 tx per block cap but I guess that will change once proper fee cost is added I wouldn't, since we should later have blocks sized on gas rather than n_txs u16 gud btc blocks can have like 4000 txs brawndo: yy won't change it btc irrelevant :p will just do block height/index to u32, tx_index to u16 and call_index to u8 true what should i look at as a ref instead? sol? its a big change so bear with me sol/eth oki big as in it affects a lot of stuff everywhere but no worries will handle them easily thats all, thanks for listening to my podcast, like, share and subscribe cu on next episode ACTION liked upvoted :D lol admin: wer subscribe button : @dasman pushed 1 commit to master: 214458322a: add deg2 (dag_browser) code : draoi: I have this issue in tau, running taud locally I can connect to my seed and get the peers (which is only one and is mine) when hostlist is saved and I'm trying to rerun taud it doesn't connect to my saved peer, nor does it get from seed again : Weirdly enough i don't see the same issue in darkirc : My peer is saved as gold btw : Removing hostlist or commenting it out altogether, or connecting manually through peers=["tcp+tls://dasman.xyz:23332"] works if anyone has this issue hanje-san: does this look good? https://imgur.com/EvowAjM from two days ago : that should never happen : can you DM me the log? gm dasman: have you ever used tig? gm what do you think of this implementation for zkas/parser.rs with the code duplication hanje-san: https://pastebin.com/HXyZrAHb Title: ret.push(Witness { name: k.to_string(), typ: m - Pastebin.com 1. cargo fmt is your friend, you should run that 2. code indents ideally should only go 3 indents max 3. there is a lot going on there. try taking type out of there as another variable. 4. the comment says you can add a TryFrom impl to VarType directly then just do token.into()? which is much shorter okay thanks for the feedback, will look into it now draoi: https://github.com/tokio-rs/loom Title: GitHub - tokio-rs/loom: Concurrency permutation testing tool for Rust. : @draoi pushed 6 commits to master: b631a10629: channel: only print disconnect errors when we're on SESSION_NET... : @draoi pushed 6 commits to master: 8a413d1c3d: refine_session: reorder start(), shutdown sequence... : @draoi pushed 6 commits to master: ec688f485a: p2p: start refine_sesssion() before outbound_session()... : @draoi pushed 6 commits to master: 182efa4b46: p2p_test: slightly more expansive testing : @draoi pushed 6 commits to master: 69470ff9b2: doc: fix incomplete debug statements on hosts.rs : @draoi pushed 6 commits to master: 7870d006d2: chore: cargo fmt hanje-san: no I haven't, but looked it up, I thought people would want to browse the dag like that : @draoi pushed 2 commits to master: fe0801bcf1: outbound_session: fix bug causing nodes to get stuck in peer discovery... : @draoi pushed 2 commits to master: 225e9bbd72: chore: make clippy : dasman: fixed ^ : awesome! Thanks dasman: install tig and use it, look how they do it ++ gm gm, happy total solar eclipse day gm !topic darkfid threadpools Added topic: darkfid threadpools (by hanje-san) gm draoi: darkfi/src/net/p2p.rs:64 can you make preference an enum?
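e.g. something like this (sketch, variant names invented - pick ones matching the actual gold/white/grey policies):

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum AddrPreference {
    Gold,
    White,
    Grey,
}

then each case has a name instead of a bare integer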
darkfi/src/net/session/outbound_session.rs:200 0, 1, 2 doesn't say much. you have comments describing them, but ideally they would have a name instead. then you don't need _ => { panic!(); } branch either ++ you're reviewing the code now btw? also the docstring for fetch_addrs_with_preference() should have empty lines with ///, the blank lines might error yes ok just put all comments here and i will get to them ty, will slowly work through this week... got other concurrent tasks too : @zero pushed 1 commit to master: af8f1e9d20: net: remove public visibility from outbound_session::Slot, and correct broken docstring can you convert all the NOTE: comments to proper docstrings? also the methods wait() and notify() should have docstrings PeerDiscovery docstring is incomplete, it says sends out GetAddrs, but it is also doing seed. You should write exactly what it does there. check all TODOs in net/ dasman: https://agorism.dev/uploads/screenshot-1712558458.png draoi: darkfi/src/net/p2p.rs:47 (line 49 also) : @zero pushed 1 commit to master: c2c967a673: net: p2p.stop() calls channel.stop() on all channels draoi: ^ i need you to verify this works correctly b631a10629f (draoi 2024-04-07 09:37:55 +0200 191) if self.session.upgrade().unwrap().type_id() == SESSION_NET { darkfi/src/net/channel.rs:282 this line is broken https://dietertack.medium.com/using-bit-flags-in-c-d39ec6e30f08 Title: Using bit flags in c++. An Introduction to Using Bit Flags | by Dieter Tack | Medium darkfi/src/net/protocol/mod.rs:76 !topic rln contract status Added topic: rln contract status (by brawndo) draoi: instead of SESSION_NET .etc, maybe instead do SESSION_ALL - SESSION_REFINE, .etc (you need to lookup how to subtract bitflags) i think it's SESSION_ALL & ~SESSION_REFINE or sth What do you need to do? it's just bitflag stuff, draoi is using them like values instead of bitflags draoi: https://stackoverflow.com/questions/44690439/how-do-i-print-an-integer-in-binary-with-leading-zeros Title: rust - How do I print an integer in binary with leading zeros? - Stack Overflow https://doc.rust-lang.org/std/fmt/#formatting-traits Title: std::fmt - Rust "{:032b}" SESSION_NET == SESSION_INBOUND|SESSION_OUTBOUND|SESSION_MANUAL|SESSION_SEED it seems there should be a SESSION_ALL = 0b11111 defn, then do SESSION_ALL & ~SESSION_REFINE, or SESSION_ALL & ~(SESSION_SEED|SESSION_REFINE) ah the goal there is to test if it's any of the sessions except SESSION_REFINE? 
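A minimal sketch of the flag arithmetic being discussed — the bit values and names are assumed here, not the actual net/session definitions (and note the correction just below: Rust's bitwise NOT is `!`, not C's `~`):

```rust
// One bit per session type; a channel has exactly one bit set.
pub type SessionBitFlag = u32;

pub const SESSION_INBOUND: SessionBitFlag = 0b00001;
pub const SESSION_OUTBOUND: SessionBitFlag = 0b00010;
pub const SESSION_MANUAL: SessionBitFlag = 0b00100;
pub const SESSION_SEED: SessionBitFlag = 0b01000;
pub const SESSION_REFINE: SessionBitFlag = 0b10000;
pub const SESSION_ALL: SessionBitFlag = 0b11111;

fn main() {
    // "Any session except refine": flip the unwanted bit off SESSION_ALL.
    let mask = SESSION_ALL & !SESSION_REFINE;

    let channel_type = SESSION_OUTBOUND;
    assert!(channel_type & mask != 0);
    assert!(SESSION_REFINE & mask == 0);

    // Debug-print with leading zeros, per the StackOverflow link above.
    println!("{:05b}", mask); // 01111
}
```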
yep Yeah you need a SESSION_ALL and then flip the unwanted one ++ draoi: some unsafe methods like darkfi/src/net/hosts.rs:947 should be marked with visibility inside net only you are already doing this in darkfi/src/net/session/refine_session.rs:58 it's important to go through all the methods and think about what is your public API, internal API and private functions, then mark the methods correspondingly !topic lilith upgrade Added topic: lilith upgrade (by hanje-san) : @zero pushed 1 commit to master: 425df4c8cb: hosts: fix macro calls formatting : @parazyd pushed 1 commit to master: bf1303b6f3: drk: Stub function for contract deploy authority generation : @skoupidi pushed 1 commit to master: d9304c15cd: runtime: changed call_index from u32 to u8 hanje-san, brawndo: lmk if I missed anything : @parazyd pushed 1 commit to master: 509d9bf0d4: net: Fix bitflags Weirdly SESSION_NET was used for arbitrary protocols But that covered the seed session as well upgrayedd: checking i changed the bitflags here: 2b18e5307b9878e74efbb9d65e935d797a9fe58c previously we were using SESSION_ALL for those protocols which includes seed session so I believe SESSION_NET (which is SESSION_SEED | SESSION_OUTBOUND | SESSION_INBOUND) is correct are you responding to me? i am responding to 509d9bf0d442f5e34107ed7ed6b75dba122af4e8 : @skoupidi pushed 1 commit to master: 1a0f997f28: contracts: simplyfied call index usize usage ah ic upgrayedd: lgtm draoi: you could do SESSION_SEED | SESSION_DEFAULT which might be clearer, or SESSION_ALL & ~SESSION_REFINE oh nice it's !SESSION_REFINE, not ~ I don't think you should be registering protocols like the event graph to the seed session ++ Also where you wanted to test if something is SESSION_NET, it doesn't always have to be the case You likely want to test if any of _MANUAL, _SEED, _OUTBOUND, _INBOUND is enabled if self.session.upgrade().unwrap().type_id() & (SESSION_ALL & !SESSION_REFINE) != 0 _NET is not a valid value for channels. each channel can only have a single bit enabled in the bitflag Yeah I removed SESSION_NET : @skoupidi pushed 1 commit to master: 0318720cd3: blockchain/tx_store: changed location tx_index from u64 to u16 hanje-san: for the TryFrom impl for VarType in zkas/parser.rs, doesn't this mean I'll need to add a new impl block for TryFrom? Similar to this: https://doc.rust-lang.org/rust-by-example/conversion/try_from_try_into.html Title: TryFrom and TryInto - Rust By Example where I'll have the match for EcPoint, EcNiPoint etc? yes !list Topics: 1. darkfid threadpools (by hanje-san) 2. rln contract status (by brawndo) 3. lilith upgrade (by hanje-san) upgrayedd: Would things work fine on a single node if I start and stop minerd at various times? I want to avoid mining all the time while I'm doing dev stuff, but then will the txs live in mempool while I'm not mining? brawndo: mempool is stored on disk in TxStore.pending tree so closing down should be fine when you close minerd darkfid will error out iirc, I haven't cleaned/handled that stuff in the node yet Yeah so I can submit txs, and then start minerd at any point and expect those txs in the next block?
ah, ok Will wait on that fix then, have a few more things to finish up in the rln contract anyway well since I assume you use the contrib/localnet/darkfid-single-node/tmux_sessions.sh script you can simply shut that down and restart it since native contracts would have been deployed on first run, consecutive runs should be fast af so the node will pick up the unproposed txs Not ideal :) I would suggest tho to increase the mining target and/or buffer size, so they are not instantly included in a block because the default config is like 20s with 1 block buffer so every block is instantly finalized since remember fork blocks are in memory, so shutting down means they get lost Yeah I don't want to shut down the node though Just the mining part while I don't need it yeah I got it perhaps just write a script to inject tx to the pending store directly? so when you restart they will get picked up but anyway I'm suggesting duck tapes XD ;) No rush brawndo: here? biab lunch upgrayedd: b brawndo: it was a silly confirmation context: block height went from u64 to u32 we are using it in the miner's secret derivation, but Fp::from doesn't work so the question was if it's ok/safe to use (height as u64).into to get the pallas to use in the poseidon hash Yes for sure : @skoupidi pushed 2 commits to master: c69732379e: script/research/gg: updated to latest darkfi structures : @skoupidi pushed 2 commits to master: 9f5e6aafc4: blockchain: changed block height from u64 to u32 hanje-san: what's the preferred depth for the txs merkle tree? right now we use 1 1? It should be MERKLE_DEPTH_ORCHARD that's for coins, and nullifiers is 256 Is depth of 1 a better choice there? I suppose there's less hashing involved? wait wait my mistake it's the max_checkpoints tree = MerkleTree::new(1) means max checkpoints of 1 right? thought it was the depth Yeah 1 is good That means you can revert adding your coinbase tx easily well it should be 0 since we rebuild it, we don't store it anywhere As a builder I mean There's no downside to having 1 there well yeah just saying will leave it as 1, and we might add some handling in the future, if ever needed gm gm hanje-san, brawndo: block structure is as per doc/src/arch/consensus.md spec, cheers Noice yeah hanje-san i'll have to finish my implementation for the VarType tomorrow, will update then, need to get some sleep no rush, be relaxed ;) k :> so you guys are the Devs of darkfi? This is where the devs are, yeah hi o/ sup hello hihi holla o/ !start Meeting started Topics: 1. darkfid threadpools (by hanje-san) 2. rln contract status (by brawndo) 3. lilith upgrade (by hanje-san) Current topic: darkfid threadpools (by hanje-san) great. we should, in darkfid, use different executors running in separate threads for the database/consensus and net code the netcode uses async which is built for many small tasks spawned and closed whereas sled assumes control of the thread it's running in (and is built for that) by spawning N CPU threads, creating 1 executor and running it in all those threads, we don't gain the benefit of async ok yeah, for now it could just be 2 executors, one main and one net in the future we could get more fancy with prioritization and stuff but for now that's probably too much. I'd consider splitting sled/consensus in one threadpool, and net in another. everything else can go with net or a third threadpool.
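A minimal sketch of that split using smol — the names and the shutdown wiring are assumptions for illustration, not darkfid's actual async_daemonize code:

```rust
use std::{sync::Arc, thread};

use smol::{channel, future, Executor};

fn main() {
    // One executor per concern, each pinned to its own OS thread(s).
    let consensus_ex = Arc::new(Executor::new());
    let net_ex = Arc::new(Executor::new());

    let (signal, shutdown) = channel::unbounded::<()>();

    // Run each executor on a dedicated thread; the net pool could just as
    // well spawn N threads all running the same executor.
    for ex in [consensus_ex.clone(), net_ex.clone()] {
        let shutdown = shutdown.clone();
        thread::spawn(move || future::block_on(ex.run(shutdown.recv())));
    }

    // Tasks go to the pool matching their workload.
    let db_task = consensus_ex.spawn(async { /* sled/consensus work */ });
    let net_task = net_ex.spawn(async { /* p2p work */ });

    future::block_on(async {
        db_task.await;
        net_task.await;
    });

    // Closing the channel makes recv() return Err, stopping both executors.
    drop(signal);
}
```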
async_daemonize macro sets up the executor, which is the same regardless of app so if we want more than one it should be created there also this stuff is really subjective how you allocate threads to threadpools this concerns darkfid mainly /fin db/consensus would have just one thread? Not necessarily couldn't that lead to sync issues if there are multiple the OS is quite good at scheduling threads so you could just spawn a bunch loopr: we already have multiple threads Yeah in the future the net code could do stuff like deprioritizing certain tasks like refinery, while marking certain tasks high priority. there would be internal executors for prioritizing tasks specifically thinking of db writes/reads should i click next? just wanted to highlight this point but it's low(er) priority and an ez patch when we want to make it ++ We should just do it !next Elapsed time: 10.5 min Current topic: rln contract status (by brawndo) XD Rather it will be forgotten we can add it as a TODO on tau :D Yeah we should start testing that good idea darkirc has been stable for me please do, thankyou :D ok rln Will just give a quick rundown i tried to add the task "You don't have write access $ tau add project:darkfid "split executors into separate threadpools for consensus/DB and net" !topic tau debugging Added topic: tau debugging (by brawndo) after it's done, i'll look at RLN. curious how deploying works too The idea with RLN is (optional) spam protection on IRC. People would have to stake network tokens in exchange for making an anonymous account that can be used to chat. The staking works as follows: - There is the Money Contract, and there is the RLN Contract - The RLN contract has 2 functions: Stake and Unstake - A user does a Money::Transfer + RLN::Stake in order to stake some tokens - The Money::Transfer should result with one output that has a spend_hook set to RLN::Unstake so that it cannot be spent - The output should also be tied to the RLN identity - When staking, all coin attributes except the "public key" are revealed. IMO this is not an issue. where is the RLN code? RLN::stake() is not mandatory in your tx - By constructing the coin as such, in combination with RLN - once an account is slashed (i.e. its secret keys are obtained) - the coin and its nullifier can be fully constructed and unstaked. hanje-san: https://codeberg.org/darkrenaissance/darkirc-rln/src/branch/master/src/entrypoint/stake.rs Title: darkirc-rln/src/entrypoint/stake.rs at master - darkrenaissance/darkirc-rln - Codeberg.org ah it's a branch ty It's a repo RLN::Stake does not have to be enforced anywhere outside of RLN::Stake function Once it's called, that's intended, it doesn't have to be forced thanks, this is exciting - If the staking state transition passes, the identity account is added to a Merkle tree and the user can now produce an inclusion proof in order to prove they're allowed to chat Unstaking works as follows: - Any holder of the identity secret can construct the staked coin s,secret,&s, would you copy pasta this to the book? just these bullets are good this should be presented as docs. - By being able to do this, the holder can call RLN::Unstake and this should allow moving the coin and removing its spend_hook (which links to RLN::Unstake) - If the state transition passes, the identity is removed from the Merkle tree, and further inclusion proofs will not be valid.
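A toy illustration of the membership flow just described, with a plain set standing in for the on-chain sparse Merkle tree — the real state transitions live in the darkirc-rln entrypoints:

```rust
use std::collections::HashSet;

struct RlnState {
    // Stand-in for the sparse Merkle tree of identity commitments.
    members: HashSet<[u8; 32]>,
}

impl RlnState {
    // Stake: after Money::Transfer locks the coin (spend_hook -> RLN::Unstake),
    // the identity commitment joins the membership set, enabling inclusion proofs.
    fn stake(&mut self, identity_commitment: [u8; 32]) {
        self.members.insert(identity_commitment);
    }

    // Unstake (or slash, for anyone holding the identity secret): the
    // commitment is removed, so further inclusion proofs stop validating.
    fn unstake(&mut self, identity_commitment: [u8; 32]) -> bool {
        self.members.remove(&identity_commitment)
    }
}

fn main() {
    let mut state = RlnState { members: HashSet::new() };
    state.stake([1u8; 32]);
    assert!(state.unstake([1u8; 32]));
}
```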
sounds good hanje-san: The repository will contain the documentation, I'm just giving an overview to everyone I have a few big picture questions, mostly curiosity It's useful to discuss HCF: btw we changed DAO so it makes arbitrary contract calls now a) This is 'spam' at the network level, or at the level of 'messages I don't want to see'? hanje-san ah nice! the code was audited and passed HCF: It's on the network level, with an arbitrary threshold b) how will users get tokens? PoW mining or some market somehow? I wonder if this would make it harder for new (non-spam) people to join chat HCF: have you seen how bluesky does moderation? they have optional moderator sets HCF: The latter can be done manually by users, likely through the IRC client c) how does this interface with ability to change/rotate nicks? brawndo ++ about b) we have a free tier, this is for heavy load, idea being free tier is rate limited hanje-san I haven't looked into bluesky HCF: b) Tokens would be obtained through mining or market. We'll also have free tiers likely c) this allows fully unlinkable messages regardless of nicks HCF: c) It is not linked to nicks. Each message would produce a ZK proof Ah ok rate-limited free tier seems like a really good option yeah it's perfect. who said fully anon & p2p chat couldn't be done? HCF: This proof proves you have a registered account, but it's not linked to anything public OK, figured nicks would not be related (similar to how DMs work), but wanted to check So you can change nicks at will, but you have to produce valid inclusion proofs ++ This is done by proving you staked a coin and have that identity commitment in the Merkle tree of commitments I saw the audit btw, very exciting. it was nice that they complimented the code style/philosophy too :) I opted to use a sparse Merkle tree so we can add/remove leaves at will cool brawndo sounds like a good design we can invite timon to test it The hard part will be keeping darkirc in sync with the on-chain tree will that require a custom irc client? or how will the inclusion proofs be integrated? loopr: It will be done through the darkirc daemon cool the optional moderation requires an ircv3 extension to be implemented in weechat or a custom client (which we will work on soon) Yeah the msgs are in a tree so I imagine that without the proof, no msgs get into the tree. So it never reaches the client hanje-san: For things to really work, the proofs should always be against the latest Merkle root hanje-san: So we'll have to figure out a stable way to keep things in sync i think we can allow some margin of error the important thing is we have the mechanism if abused, we can adjust params and tighten the screws so don't worry too much ++ Perhaps Any more questions? all good !next Elapsed time: 17.8 min Current topic: lilith upgrade (by hanje-san) draoi: so since refinery is in net itself, does lilith need to ping nodes? lilith has 2 refineries, the greylist refinery that all noes have via p2p, and a whitelist refinery that lilith itself implements s/noes/nodes noes :D this is to ensure that even if lilith is running for a long time we know its whitelist is still valid lol why do we need 2 refineries? why can't lilith just trust the inbuilt mechanism? i trust you if you think it's better shouldn't lilith just be a stupid router?
bc when nodes enter the whitelist they are not checked again, and lilith sends them around to other nodes- the reasoning was it's better if lilith periodically checks those nodes as well tbh it's not a huge deal tho if lilith sends bad nodes, since they will go via our greylist refinery anyway why don't we check whitelist nodes? aha ok bc the whitelist is nodes that have already been checked ok lilith is very small anyway i'm ambivalent tbh, i don't think it adds much for lilith to refine its whitelist occasionally, but nbd 443 lines of code so it's fine ok !next Elapsed time: 4.4 min Current topic: tau debugging (by brawndo) ok something was broken hanje-san? tau add project:darkfid "split executors into separate threadpools for consensus/DB and net" You don't have write access it is also bc lilith does not make outbound connections. for a normal node, if it makes an outbound connection to a whitelist node that is no longer online, it will get downgraded to greylist. but lilith does not have this mechanism since it doesn't make connections "darkfi-dev:2bCqQTd8BJgeUzH7JQELZxjQuWS8aCmXZ9C6w7ktNS1v", draoi: aha makes sense ty ok resolved in DM !next Elapsed time: 1.5 min No further topics :D XD Sweet so it is done? Looks cool. My first time to meet like this way. have a couple of Qs regarding the constants tnx all welcome taryou yes this is a standard dev meeting started a few impl attempts, but I wonder, can the FixedPoint trait be changed at all? das ende ist der anfang (the end is the beginning) draoi: thanks. loopr: yes ofc, they must all be changed in fixed_bases.rs ty everybody have a nice day hanje-san: uh perfect german grammar generator / u / z are data inside the class. generator is the (x, y) passed from zkas, the others are calculated using find_zs_and_us() fn just we provide some precoded constants (the ones already there) so use enums !end Elapsed time: 4.5 min Meeting ended ty all, gg hanje-san: there seem to be references in halo2_gadgets or something, that's why I am asking some other FixedPoint impl also: https://codeberg.org/darkrenaissance/darkfi/src/commit/cf7b3c8c61ac2e1b390ce71612d611b49a0e32e9/src/sdk/src/crypto/constants/load.rs#L96 Title: darkfi/src/sdk/src/crypto/constants/load.rs at cf7b3c8c61ac2e1b390ce71612d611b49a0e32e9 - darkrenaissance/darkfi - Codeberg.org vs https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/sdk/src/crypto/constants/fixed_bases.rs#L110 Title: darkfi/src/sdk/src/crypto/constants/fixed_bases.rs at master - darkrenaissance/darkfi - Codeberg.org darkfi/src/sdk/src/crypto/constants/fixed_bases.rs:203 for ValueCommitV yeah but why are there two diff ValueCommitV structs because it needs cleaning up ;) :) so I can remove the ones in load.rs? is it used? not as far as I could validate so when you delete the code, and run make test, it works? you can compile? understood, I haven't yet deleted anything, so will first go through these steps, and if all good, commit `make clippy` helps 1. check if it's used, gather info 2. make proposal 3. ask here No rule to make target proposal :p some questions about staking for irc: How long is a staking/unstaking period? haha SIN: For as long as you want. Though we should likely have some grace period for unstaking, e.g.
2 days or whatever https://rate-limiting-nullifier.github.io/rln-docs/smart_contract.html#withdraw--slashing Title: Smart-contract - Rate-Limiting Nullifier Here there are some real-world "issues" that happened So we should keep this in mind ah that's what rln stands for 'their funds being locked to freezePeriod amount of blocks, giving others an opportunity to slash (slashing happens immediately).' nice other question, when staked funds are slashed, where does that stake end? in my wallet ;) the slasher's wallet ++ gm gm brawndo How's it going airpods69? hi o/ i always wake up from lucid dreams as soon as i realize i'm dreaming brawndo, pretty good here. Trying to implement a zk proof that hanje asked me to do so banging my head over that. What about you? hanje-san, what do you think about lucid dreams? are those dreams or your thoughts? I think of it as thoughts I'm working on the CLI wallet, adding client-side functionality for deploying custom smart contracts \ : @parazyd pushed 1 commit to master: f011b02336: drk: Introduce SQL schema for Deployooor contract upgrayedd: We should make drk not always require an RPC connection to a node There's a lot of operations that can be done without it Should we just make Drk::new() take an Option for the endpoint? : @parazyd pushed 1 commit to master: faf96deeaa: drk/deploy: Implement deploy auth listing : what is this : ah the halloy protocol impl is kinda non-strict : it doesn't prepend a ':' character if there are no spaces : nice, fun little client Imagine writing more code with the goal being to follow the protocol less i wanted to see if it could be patched for ordering messages based on timestamp brawndo: yy was gonna make that change anyway, so you can init wallets and stuff without needing the rpc endpoint gr8 brawndo: here? : @skoupidi pushed 1 commit to master: d3ab4be743: drk: make rpc_client optional so its not required it in every operation yo Yeah great Thanks for doing it brawndo: just keep in mind that some operations still require the rpc calls, namely the tx operations, since they need to retrieve the corresponding zk stuff from the node Sure, we just use common sense for that stuff well sure, just saying, as common sense is not that common lately :D haha : @skoupidi pushed 2 commits to master: 96af06da04: contrib/localnet/darkfid-single-node: updated README.md to reflect latest drk changes : @skoupidi pushed 2 commits to master: 722c786157: darkfid: use a hot-swapable JSON-RPC client to handle errors while communicating with minerd brawndo: you are now able to shutdown minerd at will, while darkfid still runs obviously the caveat being that with a single node, block production will halt, until the minerd is back up and darkfid re-establishes the connection, in order to produce/mine blocks in multiple nodes scenario, the node still follows the rest ones, aka not halting, just can't produce/mine on its own, until its minerd is back up it will just keep retrying to connect to minerd hanje-san: https://codeberg.org/darkrenaissance/darkfi/pulls/252 "Implemented generalisation for generator constants access" am prepared for your critique, as I assume you won't much like the global variable I used but couldn't come up with a better idea for now yy that's the plan Thanks a bunch side note: haven't been able to get tor-browser to work with my hw security key for login, it doesn't access the key, while it works on other browsers if I'd like to run a tor node, is there any diff in terms of safety if I run it at home or on a vps?
I would think on a vps is better as the home can obviously be related to me (the vps too but maybe not *that* easily?). loopr: you need to patch zkas so it can be used also you don't need any globals maybe start with patching zkas first so you can specify the (x, y) for generator consts about tor: running it on a vps is insecure. for full anonymity you should run it locally well only as secure as the vps (which if you trust a vps, then why use tor?) hosting in a country with better privacy but sure then I prob don't understand what you mean by "patch zkas" i thought globals in this case wouldn't be too bad as it's just constants gm loopr: EcFixedPoint MY_CONSTANT = (0x07f444550fa409bb4f66235bea8d2048406ed745ee90802f0ec3c668883c5a91, 0x24136777af26628c21562cc9e46fb7c2279229f1f39281460e2f46c8a772d9ca), inside .zk files so that means fix zkas so specifying arbitrary constants for use within that file adding support to zkvm in zk/vm.rs lastly modifying sdk fixed_bases.rs so you can specify arbitrary generators using (x, y) then derive the other params using find_zs_and_us() : @zero pushed 1 commit to master: a8a63387db: doc/book: merge duplicate smart contract sections, applying small corrections : @zero pushed 1 commit to master: 9ff7bc4658: doc/book: s/state_transition()/process()/ : @zero pushed 1 commit to master: ccda1a41df: doc/book: fix nullifier derivation N = H(x, C) : @zero pushed 1 commit to master: 5c98b289ca: doc/book: add section on using zkrender tool : @zero pushed 1 commit to master: e1b259c019: zkrunner: regenerate all proof witness json files : @zero pushed 1 commit to master: 4709bfd314: doc/book: zkas/writing-zk-proofs, add comments on witness JSON files : @zero pushed 2 commits to master: 898ed02880: zkrunner: if (value := foo) can wrongly be false if value is 0, so be explicit with "is not None" : @zero pushed 2 commits to master: 441150bbad: contracts: adjust k values to smallest possible Did you test the k change? yes ofc ah interesting I thought it wasn't possible because of the sinsemilla lookup table make sure you remove test-harness/*.bin https://agorism.dev/uploads/layout.png You should update the cache hashes in test-harness most of them are k = 11, and there is something there that stops me putting it to k = 10 you see the big green column in the middle left Yeah that's the lookup table yeah so it's just slightly bigger than k = 10 ok fixing hashes Don't worry about it too much, it's better if we eventually work on the self-optimising algorithm for the zkvm Then it will know when the lookup table is actually necessary sure but it's a low hanging optimization too, but nbd : @zero pushed 1 commit to master: 2d9f5af6b5: test-harness: update pks/vks hashes what would be cool is proof aggregation since we have many zk proofs in a single tx actually nvm, it's only for the same circuit so doesn't gain much hanje-san: is this okay so far for the zkas/parser.rs change? Haven't finished it, but after feedback mainly wondering if the enum VarType is okay to use? https://pastebin.com/vhYTRFhp don't use an error string. Use the darkfi error specified in src/error.rs ah ok, what about using enum VarType? Is that ok because it adds more code? yes it's better brawndo: all actions are failing, and github is saying node16 is deprecated, that projects must now indicate support for node20: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/ ok sweet, was worried that wouldn't be allowed. 
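For the fixed_bases.rs end of that plan, a hedged sketch: decode the raw 32-byte (x, y) from the bincode and derive the window tables with find_zs_and_us(). Assumptions here: pasta_curves' from_repr/from_xy, NUM_WINDOWS as in darkfi's fixed_bases.rs, and that find_zs_and_us (which lives with the halo2_gadgets ecc chip constants) is reachable or vendored:

```rust
use halo2_gadgets::ecc::chip::constants::find_zs_and_us;
use pasta_curves::{group::ff::PrimeField, pallas};

const NUM_WINDOWS: usize = 85; // as in darkfi's fixed_bases.rs

fn load_generator(
    x_bytes: [u8; 32],
    y_bytes: [u8; 32],
) -> Option<(pallas::Affine, Vec<(u64, [pallas::Base; 8])>)> {
    // No hex strings past the parser: the bincode carries raw bytes.
    let x = Option::from(pallas::Base::from_repr(x_bytes))?;
    let y = Option::from(pallas::Base::from_repr(y_bytes))?;
    // Fails if (x, y) is not actually a point on the curve.
    let g = Option::from(pallas::Affine::from_xy(x, y))?;
    // Derive the z and u values instead of leaving them as vec![].
    let zs_and_us = find_zs_and_us(g, NUM_WINDOWS)?;
    Some((g, zs_and_us))
}
```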
Will try to implement the rest i tried reading that link but it's so damn confusing why don't they tell us the goddamn stupid flag they want us to add oh wait i got it now - uses: actions/checkout@v4 i think this should be v4 : @zero pushed 1 commit to master: bddc2b5e49: github workflows: update to v4, see https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/ : Title: GitHub Actions: Transitioning from Node 16 to Node 20 - The GitHub Blog : @zero pushed 1 commit to master: b343d6132c: doc/book: fix broken paths for code wtf the book github action fails with: error[E0275]: overflow evaluating the requirement `for<'v> &'v Simd<_, _>: Add` = help: consider increasing the recursion limit by adding a `#![recursion_limit = "256"]` attribute to your crate (`halo2_gadgets`) same error on the CI test Latest rust nightly is broken iirc ah ok ok i updated book manually fyi, https://darkrenaissance.github.io/darkfi/zkas/writing-zk-proofs.html#viewing-the-zk-circuit-layout Sweet : @draoi pushed 8 commits to master: 7e95c56d61: session: fix bitflags on mod.rs : @draoi pushed 8 commits to master: a4c9ddcfed: outbound_session: make SlotPreference enum : @draoi pushed 8 commits to master: 3998b80d8f: p2p: delete unused peer_discovery_running variable : @draoi pushed 8 commits to master: 06c64b06aa: doc: upgrade some NOTE comments to docstrings : @draoi pushed 8 commits to master: 2b46488e09: doc: add missing docstrings to net/outbound_session : @draoi pushed 8 commits to master: 38dcac45b5: p2p: remove more unused artifacts : @draoi pushed 8 commits to master: 5b02abf032: lilith: move whitelist_refinery() into session::refine_sesssion()... : @draoi pushed 8 commits to master: 2055820a9a: hosts: fix visibility... B1-66ER: that's all your feedback minus addressing all the TODOs in net which i still need to do Nice happy Eid to everyone celebrating \o/ Happy Eid likewise : Eid Mubarak : error[E0275]: overflow evaluating the requirement `for<'v> &'v Simd<_, _>: Add` ... help ... recursion limit, see https://pastenym.ch/#/2MoDkFq8&key=c95ae0533f9e9251e77fe6d8e492dd9d : Title: Pastenym arm0408.509d9bf0: use nightly-2024-04-05 : but `make test && make BINS="...all..."` works just fine with older nightly-2024-02-04 @ 2055820a9_2024-04-10 I guess arm0410.2055820a doesn't follow this chat, hence reporting old news : arm0408.509d9bf0: nightly is broken, we are aware, just use nightly-2024-04-05 :D : @parazyd pushed 1 commit to master: 6a0e5b1311: drk/deploy: Implement contract deployment transaction builder : @draoi pushed 2 commits to master: 7ad0792976: net: make self_handshake_interval a configurable Setting : @draoi pushed 2 commits to master: 61e51b33d0: lilith+net: move whitelist_refinery back into lilith + create new public functions... brawndo/ haumea- what's the status of this TODO? I don't see anything else about setting a remote_node_id in /net. Should it be removed or is this something that needs to be implemented? darkfi/src/net/protocol/protocol_version.rs:188 draoi: that's a debugging feature and the TODO is valid you see ChannelInfo it's something you set inside the config B1-66ER: you here? understand things better now, thanks 1. are .zk files edited manually? 2. each of the EcFixedPoint values need to be changed I suppose? 1. yes they are written by people 2. not just EcFixedPoint, all of the constants can be changed 3. when you say "arbitrary values", can they really be anything? e.g. 
constant "Burn" {EcFixedPointShort ("0x1234...","0x4321..."), EcFixedPoint "0x2222...", EcFixedPointBase [1,2,3],}? Are they always tuples? 4. Related to 3: should there be any validation in the parser? not tuples, an (x, y) in hexadecimal form 0x... for now, the parser doesn't need to check them, as long as they 32 byte hex values, that's all that matters cool gotcha thanks ++ basically you guys wrote your own compiler, and it seems much simpler than the awful solidity? bra*ndo wrote it very impressive yeah they did well is that the zkas compiler here? https://darkrenaissance.github.io/darkfi/zkas/index.html yep that's awesome ++ quick question - any recommendations for how to get started playing around? In particular interested in contributing to cryptography or zk but looking to understand other aspects as well Hi There's a lot of docs in the wiki/book: https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html I'd recommend going through it I've gone through various parts of the book, let me keep going through then Feel free to ask things here The contrib page has stuff on what can be done :) But generally mondays' dev meetings are good time to be on the chat Thank you very much, was just trying (and failing) to get weechat to go through tor. I'll get there Will try to join the next dev meeting, or maybe will have to be the one after do others here generally use tor or nym for weechat? or happy enoough with out the box configs? If you want to use the darkfi IRC over Tor, then you should use some special settings its config I think you should also be able to `torsocks ./ircd` but don't take my word for it, if rust doesn't use the libc's connect() then it won't work For other IRC networks in weechat over Tor, there's SOCKS5 support, should be an easy search for a howto darkirc has native tor/nym support nym's broken They want you to run an instance of nym-client daemon *per connection* gm gm damn per connection, that's impractical https://github.com/nymtech/nym/issues/3610 Never happened thx i didn't understand properly before wtf such lazy responses 10 months lol !topic wallet impl Added topic: wallet impl (by brawndo) : @parazyd pushed 1 commit to master: aa8fb77538: drk/deploy: Add contract lock tx upgrayedd: block scanning in drk is incorrect upgrayedd: It scans out of order It does `for contract in block.txs { scan_txs_for_contract }` instead of `for txs in block.txs { scan_tx_for_contracts }` brawndo: let me check brawndo: are you sure? I don't see that, whats the line? https://codeberg.org/darkrenaissance/darkfi/src/branch/master/bin/drk/src/rpc.rs#L247-L265 https://codeberg.org/darkrenaissance/darkfi/src/branch/master/bin/drk/src/rpc.rs#L195-L200 https://codeberg.org/darkrenaissance/darkfi/src/branch/master/bin/drk/src/rpc.rs#L176-L178 So within a block you'll first apply all the Money txs, and then the DAO txs - which is out of order oh that's what you mean, yeah thats wrong : @skoupidi pushed 2 commits to master: d1c11b7cf6: drk: properly scan blocks based on tx call order : @skoupidi pushed 2 commits to master: fa08a4aeae: drk: use (de)serialize_async everywhere are the recordings of the seminars at https://darkrenaissance.github.io/darkfi/dev/seminars.html available somewhere? that's an issue with IPFS I guess, things can quickly get lost, the one linked in the book can't be found https://ipfs.io/ipfs/QmQoe1LmfL1ubuML4LQy6jdeDjPKEGD7Z2Dk9kEevqBDtw are even those seminars still relevant? Not really. 
They should be taken off ah ok anyone with a suggestion for why if I put `use darkfi_sdk::hex::decode_hex;` in a zkas file, `make zkas` complains: ^^^^^^^^^^ use of undeclared crate or module `darkfi_sdk` while rust-analyzer in my editor doesn't complain? however, I could just assume ascii strings here, right? I am only interested in their lengths here, so to check if "0x1234..." corresponds to 32 byte hex values, I can just check the string is 64 long (without the "0x"), right? for now left at the above simple len check, however then we lose validation for actual hex values in the string terence: Hey, I could help out with such simple doc updates, however, I have one such PR open since ages, so not sure if opening PRs to github is the right way? https://github.com/darkrenaissance/darkfi/pull/256 loopr: zkas should have no external dependencies, so implement the hex decoding from scratch holisticode: We prefer stuff over codeberg, but that can be merged holisticode: However it seems it was already merged just github didn't pick it up https://github.com/1fishe2fishe/EXODUS Woah With networking even : @draoi pushed 9 commits to master: 740a539bfd: channel+protocol_version: add Option into Channel... : @draoi pushed 9 commits to master: 92cbca047e: outbound_session: just do seed next time if peer discovery waiting for addrs times out : @draoi pushed 9 commits to master: c0f9920d0a: doc: remove deceptive comment on session/mod.rs : @draoi pushed 9 commits to master: 9aea678125: outbound_session: replace TODO with NOTE : @draoi pushed 9 commits to master: e3c69b0fcc: chore: delete unused code comment from message_subscriber.rs : @draoi pushed 9 commits to master: efe2d57c4e: net-test: add assert : @draoi pushed 9 commits to master: 1dcd31ee3a: chore: update hosts.rs TODOs : @draoi pushed 9 commits to master: 72c22d0124: doc: change protocol_version TODO to NOTE : @draoi pushed 9 commits to master: 6c0b1faf98: net: add manual session to p2p integration test !list Topics: 1. wallet impl (by brawndo) https://agorism.dev/uploads/0001-doc-add-detailed-proj-overview.patch Can someone apply this patch for me? And update book manually if need be? yeah github actions still failing due to rust nightly issues 1. apply patch and git push to codeberg 2. cd doc && mdbook build && cp book/start-here.html /tmp/ 3. cd ../ && git fetch github && git checkout -b gh-pages github/gh-pages 4. cp /tmp/start-here.html . 5. git commit -a && git push github gh-pages git checkout master, and have a good day : @draoi pushed 1 commit to master: d3c1000093: doc: apply Detailed Overview patch to start-here.md Ty draoi i don't have github setup locally, so haven't manually applied looking into Add github remote, do git fetch and then checkout gh-pages branch from github Change, commit, and push to github Ty no need to just change the workflow files do the reverse of c9e2cc0a42d917f1fd80039895b3cc0195aa591d using -2024-04-05 actually wait I'll do it ty : @skoupidi pushed 1 commit to master: ee6f54c99e: chore: use specific working nightly version draoi: done, just keep an eye out to see if the pipeline passes http://0x0.st/X-9p.txt kela: use nightly-2024-04-05, as per https://codeberg.org/darkrenaissance/darkfi#living-on-the-cutting-edge thx hanje-san or whoever is around, can I get feedback on this implementation for zkas/parser.rs using TryFrom for VarType: https://pastebin.com/qd8RpNAb !list Topics: 1.
wallet impl (by brawndo) brawndo: gotcha B1-66ER: https://privatebin.net/?979bbff70aa96ba6#35Q29n7RUDsFcTVusFkAvGFnJgSyUKoSgWzkJwQbzkYM there are a couple of different combinations possible between old and new constant formats which I could not deduce from the info so far or maybe even more / different ones which one is desired? maybe brawndo knows as well? where's everyone at? I'll be afk, going to the beach today btw is there any merit/interest in updating this PR? https://github.com/darkrenaissance/darkfi/pull/164 upgrayedd: ^ !list Topics: 1. wallet impl (by brawndo) yo upgrayedd ping me when around draoi: yo holisticode: in its current form its not needed, as the stuff tested are already in another test, so failing will be caught. In any case I understand why a function specific test is handy, although I disagree with having "duplicates" for every little thing brawndo: what's your thoughts? hey so i'm tryna cleanup all the TODOs in /net src/net/protocol/protocol_address.rs:142 can you explain about what this check is tryna prevent? we want to avoid ppl asking for excessively long transports? why would it error out? the max number of currently supported transports + all possible combinations (mixing) is 10 ofc this changes depending on what transports we have configured, so we could calculate the exact max given our config info, but i'm tryna understand why we're doing this yeah this check is to prevent malicious requesters, so for example they send an address with random transports that we don't support, let's say for example http so when we ask for the addrs, we expect all the responses to be for our specific transports, nothing else but doesn't the fetch logic in subsequent lines address that? yeah its more of a catch early sus requests so we don't need to trigger rest logic you see src/net/message.rs:68 GetAddrsMessage uses transports: Vec<String>, which is 1. not bounded, 2. can contain random strings gotcha but so to resolve this TODO i need to filter through what transports we accept and calculate the max number? i.e. if accepted_transports contains tor && transport_mixing, max number =... or is it sufficient to say > 10, which is max number of all currently supported transports + mixing the proper way is to have like a constant vec of all the combos so the request.transports.len() <= TRANSPORT_COMBOS.len() and for transport in request.transports { assert!(TRANSPORT_COMBOS.contains(transport)) } (don't assert, but you get the point) that way if/when we add new transports and their combos, we simply add them to the constant vec ok easy noice you got it! yep all good tnx draoi: as a general rule of thumb, when you have vectors in net shared stuff they have to be bounded/checked that should also be true for the applications for example in darkfid, when we sync/receive blocks, the vector must be bounded to a max size, to catch sus nodes that are trying to flood us ++ anyone has peers i can share to someone trying to access ircd? as seeds are down afaik Perhaps try "tls://acab.accesscam.org:25561" brawndo: looks like zkas compiles constants with type but vm strips the type off? https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/zk/vm.rs#L292 hanje: I have a doc .md for the anon credential tutorial :D, what platform do you recommend to share? hackmd.io? gm gm brawndo How's it going?
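A sketch of the transport check settled on above; the combo list here is illustrative, not the real set of 10:

```rust
// Constant list of every supported transport and mix; extend it when a new
// transport is added and the check below stays correct.
const TRANSPORT_COMBOS: &[&str] = &["tcp", "tcp+tls", "tor", "tor+tls", "nym"];

fn validate_transports(requested: &[String]) -> bool {
    // 1. Bounded: never longer than everything we could ever support.
    if requested.len() > TRANSPORT_COMBOS.len() {
        return false;
    }
    // 2. No random strings: every entry must be a known transport/combo.
    requested.iter().all(|t| TRANSPORT_COMBOS.contains(&t.as_str()))
}

fn main() {
    assert!(validate_transports(&["tor".to_string()]));
    // A sus request advertising an unsupported transport is caught early,
    // before any of the fetch logic runs.
    assert!(!validate_transports(&["http".to_string()]));
}
```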
getting some work done for mickey mouse so hoping through meets, then gonna study about some EC stuff gm : gm test test back gm hows it going and also gm still trying to wake up, updating my miniquad app https://github.com/narodnik/fagman what does it do? cmon lazy boi im sure you can see the code lol wow the name really comes together B1-66ER: XD i didn't want to. gimme a min so rendering character and "show king" fair enough (ngl I spent 5 mins playing with it after running it xd) gm 4gm B1-66ER: Around? hey yeah Hey Doing more thinking on the WASM plugins for contracts Now at the db/storage design part Wondering, perhaps it's best that each wasm module maintains its own storage (and the logic) ? We currently have the .sql files, but I think the WASM in fact should hold this logic yes and also an external storage for comms between modules We'd have to resort to sled or something else embedded though but ideally these are specified in terms of properties and public methods This means for example, the Money module would be in charge of maintaining the keypairs https://en.wikipedia.org/wiki/D-Bus#/media/File:D-Feet.png They would be specified through the API, I think each contract can specify their own And in case another contract needs it, they can simply have them as a Rust dependency did you ever see how XUL was designed? or blender plugins? you have a giant property tree, everything is in terms of this tree We'd have to design a standard for it then other components can choose to subscribe to changes to these properties I'm not sure I can do that at this point, since there is not enough data on how things will be used yeah we won't get it right first time around, and should look at other examples altho don't feel any pressure because most of these systems just started with a rough model and tweaked it until they got there Yeah https://www.xul.fr/tutorial/ But generally you agree with the storage idea/plan? There'd be a sled DB and the contracts would have their own trees for storing stuff you mentioned SQL too, how would that work? altho having sled too is handy since you can build DBs that don't have to be contract side No I was saying we currently have SQL, but we would not be using it anymore ah ic what about using SQL users? 
Not doable in sqlite, you need an SQL server for that the FAQ says you can do a seperate DB per user anyway up to you, you can add sled, and add SQLite later too if you want It's based on UNIX users there filesystem permissions (Talking about sqlite) maybe we want a virtual filesystem nah and then you can instantiate sled or sqlite with it on the File objects or you can do whatever you want inside of it (like create or move files) You can't have a generic interface between sqlite and sled You will always end up with k/v So it's better to just design for sled i mean with a VFS, you have your app local storage, and when opening a sled DB, it gets translated internally you should probably make a separate sled DB per wasm since each wasm runs in its own thread i think opening trees on the same DB won't be performant anyway since sled DBs use a single thread in the background sled is filesystem-based it's fine Thread-safe It's a lot better than sqlite in this case actually i don't mean thread-safe, but performance wise and also resiliency perf is great sled runs in a single thread if you create trees from a global sled DB, then all those trees run on the same thread sled doesn't "run" but each wasm should ideally be isolated in case it crashes It does IO ops via kernel sled maintains its own single thread in the background iirc i mean the maintenance process Why does it matter? because then it means a wasm plugin can crash the entire daemon How? each wasm is sandboxed in its own threads, if it starts going crazy, the other wasm threads + daemon still operate fine. we can kill the wasm process and everything associated with it dies. That doesn't answer my question well the sled DB creates a subset of threads that handle IO ops. when you call sled, it actually just queues ops for the backend. if the wasm plugin misbehaves, then it could overload the queue congestion depends how strongly you want to isolate the processes, maybe you decide to allow it This is an issue that any plugin can do regardless of threads or whatever a single queue will get saturated, whereas multiple queues will not get saturated using a single DB is just one queue and a bottleneck, but having a DB per plugin means each DB instance is sandboxed we could send the sled devs $100 and ask their opinion check this btw https://github.com/komora-io/marble > Each call to Marble::write_batch ... It is blocking. ik seems like writes are fully sequential across the db? so this is called by the background write thread after the cache is filled since you have the SQL defns and its better for wallet stuff, why not add SQL support first? might be easier, and just do separate instances per plugin Will consider how will it work with UI btw? will wallet also load the same WASM? 
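A tiny sketch of the one-sled-DB-per-wasm-plugin idea from this exchange, so each plugin gets its own IO queue and a misbehaving one can't saturate a shared instance — the paths and names are made up for illustration:

```rust
// Each plugin opens its own isolated sled instance under its own path.
fn open_plugin_db(plugin_name: &str) -> sled::Result<sled::Db> {
    sled::open(format!("plugin-data/{plugin_name}"))
}

fn main() -> sled::Result<()> {
    // e.g. the Money module keeping its own keypair storage
    let money_db = open_plugin_db("money")?;
    let keypairs = money_db.open_tree("keypairs")?;
    keypairs.insert(b"default", b"<serialized keypair>".as_slice())?;
    Ok(())
}
```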
It's gonna be a lib that any UI can use you know the way Qt (and other UI libs) have Button, which has a draw method containing the sequence of actions to draw the button i was wondering why the button isn't more like an object containing subobjects like rectangle, line, textlabel, .etc idk if that is too crazy then you can easily customize button by modifying its child objects mad how firefox XUL was a generic application dev platform, and they wrote apps in that like thunderbird https://udn.realityripple.com/docs/Archive/Mozilla/XUL/Tutorial/XPCOM_Interfaces https://udn.realityripple.com/docs/Archive/Mozilla/XUL/Tutorial/XPCOM_Examples : @skoupidi pushed 1 commit to master: 40739693a1: darkfid: persist sync headers in a sled tree + some minor beautifications the bad part of XUL was X : @parazyd pushed 1 commit to master: bdb24d3078: tau: Rename main.py to tau amir what were u gonna say abt the net code at ethdam before u got cut off. been looking at it myself : gm gm gm : @draoi pushed 1 commit to master: 88f4d67b47: doc: upgrade arch/p2p-network.md B1-66ER: what should i study to better understand the net scoring subsystem idea? seems to be taking some concepts from https://qed.usc.edu/papers/ChowGM08.pdf this is the bug you mentioned right: "Since the reader in messages.rs preallocs buffers, there should be a hard limit here" hi yes correct for scoring, i don't have any good refs https://docs.libp2p.io/concepts/security/dos-mitigation/ Title: DoS Mitigation - libp2p https://github.com/libp2p/go-libp2p/tree/master/p2p/host/resource-manager Title: go-libp2p/p2p/host/resource-manager at master · libp2p/go-libp2p · GitHub we want a "resource manager" so if you look at the wasm runtime or eth smart contract engine, there's the concept of "gas" it's like that but for p2p if you cross the watermark, then you get banned it's not actually gas tho cos not DRK denominated right yes it's not gas, it's just 'metering' for the duration of that connection ++ https://github.com/ipfs/kubo/blob/master/docs/libp2p-resource-management.md Title: kubo/docs/libp2p-resource-management.md at master · ipfs/kubo · GitHub my idea is this: the resource manager is configurable by applications, but we provide a default one with sane settings this default just works on the message types, and you can add custom messages for scoring too https://github.com/libp2p/go-libp2p/tree/master/p2p/host/resource-manager#readme Title: go-libp2p/p2p/host/resource-manager at master · libp2p/go-libp2p · GitHub > resource usage accounting aka scoring connections actually we don't need to make it configurable, just having something scoring messages (which you can add custom ones) ok will read thru the provided, tnx the idea is there are actions and params: RECV_MESSAGE with params (message_type, message_size), SEND_MESSAGE (same as RECV), ... 
receiving a message consumes some resources depending on its size so there's a base calc just for message_size but then also there's additional work (set by the application) on top of that, depending on what action the message triggers i think it can be quite simple and do the job but needs a little thought into its design ++ will reflect and get back with Qs the other thing is darkfi/src/net/message.rs:137 both String and Vec aren't checking the length of their data before they start reading ++ so for example if the data i'm sending you is 1 trillion bytes yeah needs to be bounded right you shouldn't just keep reading all 1 trillion bytes and *then* ban the host idk what an acceptable bound looks like tho ideally it should stop reading immediately so maybe AsyncDecodable in serial/src/async_lib.rs needs another trait AsyncDecodableBounded or AsyncDecodableSafe decode_sync(..., limit: usize) those are the 2 things for hardening p2p layer cool tnx B1-66ER: The decoding is still not entirely safe though It's easy to encode for example a Vec<Vec<u8>> with the inner one being a huge size. This wouldn't be checked by the outer packet decoder There's a lot of places you can fake the size and make the serial lib do a big allocation, possibly getting them OOM true so _safe name is a misnomer, limit or bounded is better actually draoi, i have an idea that could work without changing serial lib You need this: https://doc.rust-lang.org/std/vec/struct.Vec.html#method.try_reserve Title: Vec in std::vec - Rust But still not thread-safe I think draoi: .decode_async() uses AsyncRead, and we pass in a stream which is AsyncRead we should make a wrapper iterator with AsyncRead that implements bounding on the reader let stream = BoundedStream(stream, limit); let command = String::decode_async(stream).await?; like that draoi: about the message dispatchers doing decoding and scoring subsystem, the easiest change might be just to pollute message subsystem with scoring subsystem altho it's non ideal however if we wanted to do things correctly, then read_packet could be changed/optimized to immediately deserialize the message from the stream. So we would 1. use .peek() to read the size of the payload (before reading), 2. call the appropriate dispatcher which does M::decode_async(stream) directly (instead of reading the Vec then deserializing it to M) so i guess this would just involve moving code around, or generalizing message_subsystem from being purely a dispatcher subsystem, to actually taking in the streams directly (rather than packet data), handling calling the scoring subsystem .etc we should also modify .send_packet() so it calls .encode_async() on messages directly rather than using this intermediate packet type. it would be more efficient and cleaner code note we then need to add AsyncEncodable/Decodable to our message types (and maybe remove non-async Encodable/Decodable) Can't remove non-async serialisation because wasm We have to support both scoring subsystem should have a table consisting of enum ScoringAction type where it is ReadPacket(multiplier), SendPacket(multiplier), RecvMessage(lambda &message), SendMessage(lambda &msg) .etc Only perhaps if it's possible to somehow rewrite the non-async functions to support both async and non-async at the same time But I'm not sure Rust can do that so applications can customize the table oops I meant RecvMessage(command, lambda &message) (same for Send) brawndo: ok np but we're talking about net messages like Ping, why does wasm need those?
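A sketch of that ScoringAction table; the variant names and fn-pointer signatures are assumptions, not a settled API:

```rust
type Score = u64;

#[allow(dead_code)]
enum ScoringAction {
    // Flat per-byte costs for raw packet IO.
    ReadPacket { multiplier: Score },
    SendPacket { multiplier: Score },
    // Per-message hooks so applications can price custom message types.
    RecvMessage { command: &'static str, cost: fn(usize) -> Score },
    SendMessage { command: &'static str, cost: fn(usize) -> Score },
}

fn charge(action: &ScoringAction, payload_size: usize) -> Score {
    match action {
        ScoringAction::ReadPacket { multiplier }
        | ScoringAction::SendPacket { multiplier } => multiplier * payload_size as Score,
        ScoringAction::RecvMessage { cost, .. }
        | ScoringAction::SendMessage { cost, .. } => cost(payload_size),
    }
}

fn main() {
    let recv_ping = ScoringAction::RecvMessage {
        command: "ping",
        cost: |size| 1 + size as Score,
    };
    assert_eq!(charge(&recv_ping, 8), 9);
    // Once a channel's accumulated score crosses the watermark, ban it.
}
```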
>(and maybe remove non-async Encodable/Decodable) I was refering to this i meant from the net messages not remove it entirely ;) ah well the derive macro just does both im not _that_ crazy We'd need separate derive macros Which essentially just duplicates the code dont worry sync serialization is a must imo https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/serial/derive/src/lib.rs#L88-L92 Title: darkfi/src/serial/derive/src/lib.rs at master - darkrenaissance/darkfi - Codeberg.org ok so SerialEncodable also generates AsyncEncodable/Decodable too? nice https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/serial/derive-internal/src Title: darkfi/src/serial/derive-internal/src at master - darkrenaissance/darkfi - Codeberg.org Yeah it does both and is feature-guarded ah brilliant i find these macro tokenizers a little scary lol Only on the first skim :) They are sensible actually https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/serial/derive-internal/src/sync_derive.rs#L157-L177 Title: darkfi/src/serial/derive-internal/src/sync_derive.rs at master - darkrenaissance/darkfi - Codeberg.org Everything inside quote!{} is just normal code that you turn into a macro #foo is stuff you can build still impressive B1-66ER: did i correctly understand that you're saying to move the notify() call inside of message::read_packet and have it call M::decode_async(stream) directly rather than the current deserialization steps (packet -> message -> dispatchers)? just talking purely about the architectural change now, not the other hardening tasks : @parazyd pushed 1 commit to master: dd3f0583ce: net/tests: Fix license header yep hello fellow packet sniffers :D hello there hey user check this summary here https://darkrenaissance.github.io/darkfi/start-here.html#detailed-overview Title: Start Here - The DarkFi Book contains all the detailed info https://notgull.net/expect-smol-2/ Title: What to expect from smol 2.0 – notgull – The world's number one source of notgull stjepang left smol so its under new mgmt now It's been out for a while now but there's still some bugs to fix before we can update : @skoupidi pushed 1 commit to master: 12efdd87f3: darkfid: use proposals/consensus logic while syncing... : @dasman pushed 1 commit to master: df6a99ba9e: bin/deg2: added graph column showing a minimized plot of the event graph : ^ this currently handles two events in the same layer, but should be expanded soon : https://imgur.com/78hZLRj : Title: Imgur: The magic of the Internet Title: Imgur: The magic of the Internet : @dasman pushed 1 commit to master: bfb2c01905: bin/darkirc: use none-default ports in test script : gm draoi: right now we read bytes_size, then we read the bytes into a vec, and lastly we deserialize that vec. better is to do this: 1. read bytes_size (async_deserialize or whatever it is) 2. check bytes_size does not exceed the limit (for now just do limit > FOO, although we could just use the scoring system directly for this) 3. construct let stream = BoundedReader(stream, bytes_size); and deserialize from that. 
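A sketch of that 3-step plan, using the ready-made combinator found just below (futures' AsyncReadExt::take) instead of a hand-rolled BoundedReader; the fixed u32 length prefix and MAX_MSG_SIZE stand in for the real VarInt and limit:

```rust
use futures::io::{AsyncRead, AsyncReadExt};

const MAX_MSG_SIZE: u64 = 1024 * 1024;

async fn read_bounded<R: AsyncRead + Unpin>(mut stream: R) -> std::io::Result<Vec<u8>> {
    // 1. Read bytes_size.
    let mut len_buf = [0u8; 4];
    stream.read_exact(&mut len_buf).await?;
    let bytes_size = u32::from_le_bytes(len_buf) as u64;

    // 2. Check it does not exceed the limit *before* allocating or reading.
    if bytes_size > MAX_MSG_SIZE {
        return Err(std::io::Error::new(
            std::io::ErrorKind::InvalidData,
            "message too large, ban the channel",
        ));
    }

    // 3. Bound the reader: take() stops yielding after bytes_size bytes, so
    //    a lying peer can't make us read past the declared length. The real
    //    code would hand this bounded stream to M::decode_async() directly.
    let mut payload = Vec::with_capacity(bytes_size as usize);
    stream.take(bytes_size).read_to_end(&mut payload).await?;
    Ok(payload)
}

fn main() {
    // e.g. a 3-byte message "abc" behind a little-endian length prefix
    let wire = [3u8, 0, 0, 0, b'a', b'b', b'c'];
    let payload =
        futures::executor::block_on(read_bounded(futures::io::Cursor::new(wire))).unwrap();
    assert_eq!(payload, b"abc");
}
```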
Deserialization will fail if the object we're deserializing is bigger than bytes_size, so then we handle that and ban the channel the BoundedReader acts as though the iterator for reading has finished once we read bytes_size check if there's anything in the rust stdlib for this already, if not maybe there's a lib implementing this, then you can copy the code ok and this is all happening in the message subsystem via the channel (hence channel.ban()), and read_packet etc methods will not exist any more ah yes true, we need the dispatchers to take the stream and handle deserializing from it you can implement this already for String where we read the command ok this BoundedReader is a system/ or util/ type thing https://users.rust-lang.org/t/implement-stream-with-asyncread-trait-bounds/79377 Title: Implement Stream with AsyncRead trait bounds - help - The Rust Programming Language Forum https://github.com/jbr/async-read-length-limit Title: GitHub - jbr/async-read-length-limit: async read length limit it should be easier since we are specializing it for the smol stream type and exporting AsyncRead trait, but this lib for example implements it (search around for others) ++ https://docs.rs/tokio/latest/tokio/io/trait.AsyncReadExt.html#method.take Title: AsyncReadExt in tokio::io - Rust nice ah yeah we need the generic version since we have a trait type PtStream for all the transports it's not just smol stream anymore check system/condvar.rs all the .read() methods are from AsyncReadExt which is just a wrapper for the basic AsyncRead in futures::io https://docs.rs/futures/latest/futures/io/trait.AsyncReadExt.html Title: AsyncReadExt in futures::io - Rust https://docs.rs/futures/latest/futures/io/trait.AsyncRead.html# Title: AsyncRead in futures::io - Rust oh look you already have it in AsyncReadExt https://docs.rs/futures/latest/futures/io/trait.AsyncReadExt.html#method.take Title: AsyncReadExt in futures::io - Rust draoi: ^ you can just do stream.take(bytes_size) lol and pass that to decode_async :D happy days yeah so read_packet is a vec which is (var_uint, bytes), then we take bytes and deserialize that object but instead now you read the var_uint manually, apply checking to the size, then use .take(N) and deserialize on that thing same for the string ++ i'll start here, then look at messagesubsystem refactor brawndo: i remade therapy (just the rendering part) with miniquad https://github.com/narodnik/therapy Title: GitHub - narodnik/therapy you can use dbus to control it from python, so we can add zmq p2p stuff and whatever tools we want easily hello a bit confusing to have a code path src/sdk/python/src and have rust code beneath it? hanje-san: brawndo: I updated the constants PR, make test still running but should pass (ran yesterday too and only did few cosmetic changes) to be very honest: it has been quite a ride it feels those enums and structs are a bit messy and inconsistent then lacking all the domain skills in terms of the crypto used, i didn't dare to refactor more than I actually did use of advanced traits and rust features added to the confusion for a nascent rustacean... looking forward to the reviews, not sure i really got it sorted expecting also improvements and optimizations can be made make test just passed codeberg is down https://status.codeberg.org/status/codeberg Title: Codeberg Service Status huh my push crashed it B1-66ER: up again for me ah yeah why are you parsing the hex string in the zk vm? 
to validate it is a valid 32 byte value umm did you even test this code at all aaah hehe actually forgot to ask ctrl-f `find_zs_and_us` 0 matches how can I write a test for this edit a .zk file, i gave you a const before well it compiles maybe this is too difficult for you? are you a web dev? no idk how you think passing hex strings internally to represent bytes is ok at all i mean, i did edit a .zk file and compiled with zkas i don't see any .zk file in the pull req, and no unit test or anything https://codeberg.org/darkrenaissance/darkfi/src/branch/master/proof/opcodes.zk#L31 Title: darkfi/proof/opcodes.zk at master - darkrenaissance/darkfi - Codeberg.org loads of places where the consts are used https://pastecode.io/s/b3wr45xa Title: Untitled (b3wr45xa) - PasteCode.io why are they quoted strings? used this and manually tested with zkas EcFixedPointBase MY_OTHER_CONSTANT = ( "0x07f444550fa409bb4f66235bea8d2048406ed745ee90802f0ec3c668883c5a91" , "0x24136777af26628c21562cc9e46fb7c2279229f1f39281460e2f46c8a772d9ca" ), EcFixedPointBase MY_OTHER_CONSTANT = (0x07f444550fa409bb4f66235bea8d2048406ed745ee90802f0ec3c668883c5a91, 0x24136777af26628c21562cc9e46fb7c2279229f1f39281460e2f46c8a772d9ca), Self { g: nullifier_k::generator(), u: vec![], z: vec![] } there are no tests inside proof, so where are other unit tests located? this is wrong fn set_zs_and_us(&self, us: Vec<[[u8; 32]; H]>, zs: Vec) -> Self; they are never used empty this is never called anywhere so this task has two main elements 1. implement parsing and vm handling 2. setting the actual constants i only said i updated the PR, not that it is finished i said many many times you have to call find_zs_and_us() in the zcash library to set those values from G this is web dev style, not system code i haven't written any web code since 2017 but ok considering that at the same time i am also learning rust... these mistakes are not to do with rust it's completely unacceptable to pass strings around internally to represent bytes you never do that in system code *to pass hex strings aren't we parsing all strings though in this parser? yes but not in the VM to represent data that goes on the stack so if that whole fn would have been left "inline" inside the if it would have been ok? not calling a fn? no, the hex data must be in the bincode itself, not a hex string ok tbh there aren't that many unit tests in zkas, are they somewhere else? : @dasman pushed 2 commits to master: cc01c7767d: bin/deg: remove unused code : @dasman pushed 2 commits to master: 851ad0bfe2: bin/deg: use rpc.py for RPC connections Oops, this ^ also fixes 'enter' for details view, and 'b' to get back to main view : @dasman pushed 1 commit to master: 9a0d582dca: bin/deg: rename deg2 to deg : @dasman pushed 3 commits to master: 528e03302a: bin/deg: remove JsonRPC class to actually use src/rpc.py and handle its connection errors : @dasman pushed 3 commits to master: 8ec39f8368: bin/deg: add footer to details view : @dasman pushed 3 commits to master: 5906da4382: bin/deg: add parents to events details gm gm : @draoi pushed 1 commit to master: 7528147b6c: message: extract the length of the packet into a buffer, then deserialize... dasman: it looks super cool rn : test test back really cool https://agorism.dev/uploads/screenshot-1713599236.png dasman: is it possible to have the event hash displayed as well next to the datetime? i mean the first 8 chars of the event hash, not the entire thing brawndo: here? B1-66ER: thanks, or maybe hashes be instead of the footer, like in tig?
i mean like this: 787993657e97b67948ad 2024-03-22 10:19 +0100 parazyd o halo2_proofs/dev/cost: Support circuit-params in measure() so that when a hash is missing we can discuss it together and find where the problem is ++ : @dasman pushed 1 commit to master: 58b14d7c9d: bin/deg: show first 10 chars of event's hash in main view hi yo omg it's immanuel kant himself gm I started to look into this PR: https://codeberg.org/darkrenaissance/darkfi/pulls/252 and got a doubt. Would there be a situation where `EcFixedPointBase` wont be `NULLIFIER_K` and maybe some user input in proof/voting.zk:7 ? Title: #252 - Implemented generalisation for generator constants access - darkrenaissance/darkfi - Codeberg.org I'd like to think that EcFixedPointBase won't really get a user input point or I missed out on something airpods69: wdym? ofc it has user input EcFixedPointBase FOO = (x, y) rn these constants are hardcoded but they should not be B1-66ER: ahh okayy I thought that NULLIFIER_K could remain like that (should've looked deeper) yes you're right NULLIFIER_K is precoded i'd also like to confirm one more thing, when constants are being set, all of the `constants` are supposed to be manually set? (like FOO=(x, y)) or can it be just any one (or x numbers) and other constants using the hardcoded values? so if the name of the constant is NULLIFIER_K, VALUE_COMMIT_R or any of the ones that currently exist, then you cannot do = (x, y) but if it's some other name then you must do = (x, y) some of those values in fixed_bases.rs might need to become enums, like impl FixedPoint for NullifierK { rn if you see darkfi/src/sdk/src/crypto/constants/fixed_bases.rs:190 well you get the idea darkfi/src/sdk/src/crypto/constants/fixed_bases.rs:143 it needs tidying up maybe all these enums and structs can be merged into a single one it's like normally the consts are user configured but we provide some presets yes but are all constants supposed to be user configured at the same time? or can it be randomly done? like some user configured and some constants precoded Kinda like this: constant "Vote" { EcFixedPointShort VALUE_COMMIT_VALUE, EcFixedPoint MY_OWN_CONSTANT = (10, 20), EcFixedPointBase NULLIFIER_K, } EcFixedPointShort and EcFixedPointBase are precoded but EcFixedPoint isnt? (Idk I could be missing out on something here) Actually idk, I'll get the parser to work first so that the constants reach bincode, could figure this out once that's done yes correct airpods69: that is correct, we already have precoded constants working so we don't want to break the existing code, but we want to enable configuring them Perfect okayy, gonna eat some noodles and then get back to this again. afk your intuition about getting the parser working first to generate the correct binary is correct back, thanks, starting to work on it. gm Had unstable electricity this weekend Ordered another UPS lol brawndo: welcome back to the matrix yooo brawndo: did you see my gift? https://github.com/narodnik/therapy Title: GitHub - narodnik/therapy ooh excellent I'll try it out :) its so cool i was waiting all weekend for you to see we could replace dbus with zmq, i just wanted to try the zbus lib and dbus to mess around, but for p2p we can use python-zmq with json msgs easily Yeah I don't really like dbus Also it wouldn't be available on Android and stuff If it ever comes to that zmq can probably be bundled somehow i couldnt find a good rust zmq lib Wasn't there one made from C bindings? 
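An aside on the constant rule spelled out above: a toy sketch of the check it implies, with a hypothetical function name and a deliberately partial list of precoded names:

```rust
// Partial list for illustration; zkas has more precoded constants.
const PRECODED: [&str; 3] = ["NULLIFIER_K", "VALUE_COMMIT_R", "VALUE_COMMIT_VALUE"];

fn check_constant_decl(name: &str, has_coords: bool) -> Result<(), String> {
    match (PRECODED.contains(&name), has_coords) {
        // A known name must not carry coordinates...
        (true, true) => Err(format!("{name} is precoded, drop the = (x, y)")),
        // ...and an arbitrary name must provide them.
        (false, false) => Err(format!("{name} is unknown, so it needs = (x, y)")),
        _ => Ok(()),
    }
}
```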
yeah thats the default also what zmq pattern do i use for multiple request-reply conns? isnt normal REQ socket just for a single connection? What was the one we used in therapy before? It seemed to work That one was with a centralised server i mean the renderer here, not the p2p comms And it multiplexed the data to everyone you see the python scripts communicate locally with the canvas so for example in therapy/pencil_libinput.py:19 we're drawing a crosshair for the wacom cursor (when not drawing) I can't see that file oh ah nvm it's in the root I was looking at subdirs ah yeah so there's a separation between p2p network traffic, and the local canvas which is operated by the python scripts could also be done with jsonrpc Yeah the dbus stuff is so heavyweight, it starts its own async core, and has so much stuff but busctl tool is quite nice, you can introspect everything anyway just try, its really fun, i made it in a fit of angry passion in 1-2 days Will do :) :D aggressive code writing is always good :D splitting the render canvas from the logic is such a good idea let the hate flow into bytes the render canvas is fast and minimal then the logic you can make using python and is completely customizable so you can edit and script everything the canvas just has layers, a camera to move around, draw line .etc (see pytherapy/api.py) you select tools by running different python scripts Nice just getting started with miniquad gfx stuff is v fun the wallet needs templeOS on screen sprites and bouncing balls 10/10 would use XD :D comment from hackernews thread about terry davis: "any competent programmer if given 10+ years could do what terry davis did" <- lmao what did he do in 10 years #goals shieet someone said to picasso "I could have done that!!" and he replied "why haven't you?" haha >Not implemented for X11 >mfw oh really? let me test, it should work fine... linux_backend: miniquad::conf::LinuxBackend::WaylandWithX11Fallback, maybe the wayland fallback for x11 is buggy (i have it this way cos the x11 fallback for wayland is buggy) It's just a log line that the rust program wrote weird im booting my x laptop to test Now looked deeper, zmq would be nicer than dbus tbh Alternatively could also just be a simple socket https://github.com/mikelodder7/android-building/tree/master/x86/zeromq Title: android-building/x86/zeromq at master · mikelodder7/android-building · GitHub ok great i can put rust-zmq np brawndo: it runs on X np for me does fagman work for you? since it's essentially the same code Yeah it does is your computer maybe single threaded? https://parazyd.org/pub/tmp/screenshots/screenshot00377.png it works fine It's just something internal to miniquad likely wdym, it looks fine here It spits out that message when I start it I never said it doesn't work lol which message? Not implemented for X11 ohh weird idk what that is, maybe a miniquad thing i don't see the message in your screenshot i thought you were saying it doesn't open for you ah sry no gotcha, but nice works as planned haha Failed to connect to Wayland display. Failed to initialize through wayland!
Trying X11 instead 13:23:20 [DEBUG] (1) therapy: draw_line(origin, -0.1, 0, 0.1, 0, 0.001, 1, 0, 0, 1) 13:23:20 [DEBUG] (1) therapy: draw_line(origin, 0, 0.1, 0, -0.1, 0.001, 1, 0, 0, 1) Not implemented for X11 ah yeah that's normal last line is not normal, idk what it is Yeah dunno i see it too, will look later into it you should run ./keyb_nav.py too miniquad/src/native/linux_x11.rs:329 Only place where that message is :) Something to do with kb oh maybe it's for virtual kb or something? ah true cos i'm doing show keyboard which is for android ok yeah that's probably it yes good find fixed, git pull That's it :) the app is proper suckless philosophy the tools are hackable Yeah The dbus stuff might be a tad too much :D I like the idea of bundling zmq statically, or just using a unix socket Then you make it actually portable zmq maybe even more than a socket, since windows (altho who uses windows) Would be fun to try 9p It's also kinda designed for this usecase i did actually think about raw sockets but zmq is built for this kind of stuff, esp on the p2p layer https://man.cat-v.org/plan_9/5/intro Title: intro page from Section 5 of the plan 9 manual nice, kinda like a rest api https://zguide.zeromq.org/docs/chapter3/#The-Asynchronous-Client-Server-Pattern Title: 3. Advanced Request-Reply Patterns | ØMQ - The Guide In plan9 you have the WM called rio It exports a control interface under /srv ah yeah i need a ROUTER for the server, then clients can just use REQ sockets https://stackoverflow.com/a/29502330 Title: zeromq - ZMQ: Multiple request/reply-pairs - Stack Overflow So you can do `echo new -dx 800 -dy 600 > /dev/wctl` to open a terminal woah nice, i guess that's what dbus wants to be Yes I think that's what we probably did in the first version (re: zmq) the first version didn't have a separate render canvas zmq proxy i mean the dbus part, replacing it with a zmq socket (rather than jsonrpc) yeah but that was between hosts sharing traffic ah you want actual p2p instead of distributed? i think we should go full p2p lol, and just host proxies for ipv4 plebs https://gitlab.com/drummyfish/small3dlib Title: Miloslav Číž / small3dlib · GitLab so if you use ipv4, you run a special script which uses a proxy Yeah actually original inspiration for this, was a kind of livecoding env where we could quickly prototype the wallet realtime so placing objs on screen and then porting it to rust heh cool gm !list Topics: 1. Header::height should be u32 (by hanje-zoe) 2. add call_idx to env => remove from process() ix (by hanje-zoe) gm gm gm gm, huh these are old topics !clear !deltopic 1 Removed topic 1 !deltopic 2 No topics !list Topics: 1. add call_idx to env => remove from process() ix (by hanje-zoe) !deltopic 1 Removed topic 1 My computer where the bot is crashed So perhaps it was the old pickle B1-66ER: https://github.com/zeromq/pyzmq/issues/1646 Title: ROUTER-ROUTER communication example · Issue #1646 · zeromq/pyzmq · GitHub This is likely what you want yes ty afk offline (dr) hey all! 
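For reference, a minimal sketch of the ROUTER-serving-many-REQ-clients shape mentioned earlier, using the rust-zmq bindings (endpoint and payloads are made up, and a real server would loop instead of handling one request):

```rust
fn main() -> Result<(), zmq::Error> {
    let ctx = zmq::Context::new();

    // One ROUTER socket serves any number of REQ clients.
    let server = ctx.socket(zmq::ROUTER)?;
    server.bind("tcp://127.0.0.1:5556")?;

    let client = ctx.socket(zmq::REQ)?;
    client.connect("tcp://127.0.0.1:5556")?;
    client.send("ping", 0)?;

    // ROUTER sees [identity, empty delimiter, payload] and must echo
    // the identity frames back so the reply reaches the right client.
    let frames = server.recv_multipart(0)?;
    server.send_multipart(frames, 0)?; // echo back, acts like a pong

    println!("{:?}", client.recv_string(0)?);
    Ok(())
}
```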
Messaging from Sweden, i'll finally be able to make it to a meeting lol also, managed to get this all working on my macbook using Multipass (kinda like WSL), I know nobody likes macbooks here and I'll admit I feel slightly ashamed but there's a way to get it working I can compile the change I've been trying to implement now too, so I can try to finish it https://miro.medium.com/v2/resize:fit:687/1*urkG1quJnJIApIMyTQIe1A.jpeg lmao literally me, having said all that, it's cold as hell where I'm at, -5C in the mornings damn its cold there wow, I freeze at 2C here yeah my hands are dry and bleeding already !topic stale tx handling Added topic: stale tx handling (by upgrayedd) gm gm b o/ yo B1-66ER around? : @skoupidi pushed 1 commit to master: 39bfc94d39: darkfid: sync cleanup !list Topics: 1. stale tx handling (by upgrayedd) : @dasman pushed 1 commit to master: f270278588: bin/deg: fix identation issue for larger layer numbers : ^ indentation* yo !topic sync checkpoints Added topic: sync checkpoints (by upgrayedd) holla heyo heyy \o hi upgrayedd: hey yeah Hi B1-66ER: we can discuss in the topic : @draoi pushed 1 commit to master: 7085ac34b1: doc: add libp2p resource manager notes to arch/p2p-network.md !start Meeting started Topics: 1. stale tx handling (by upgrayedd) 2. sync checkpoints (by upgrayedd) Current topic: stale tx handling (by upgrayedd) ok should we start? sure ok so the context is: user spends a coin and creates tx A using that coin, then unspends coin and creates tx B using same coin, then broadcast these two txs to the network when nodes pick them up, they are both valid, therefore enter the mempool what do you mean "unspends coin"? speaking old drk wallet terms, tldr: just reuses same coin ic so we need to define how to handle when one of those two becomes invalid, aka stale 1. when we create the new block, we grab all unproposed txs of the fork from the mempool lets say this sequence will be: [txA, txB] txA verifies so it enters the block, txB will not so it will get skipped for example should we immediately remove it from specific fork pool? I guess yeah yeah seems to be correct since it fails to verify A node that isn't actively mining probably shouldn't even maintain its own mempool but just relay things around then we need some mechanism to cleanup pending_txs which are not referenced from active forks A node that is mining can do garbage collection whenever it tries to construct a block doesn't relaying txs make the p2p vulnerable to spam? I guess an async task at fixed intervals can do that, aka grab all pending txs, see which are not referenced into forks mempools, and simply drop/remove them !topic tutorial document peer review Added topic: tutorial document peer review (by ash) Dunno brawndo: well the point of keeping txs is for example we can implement a rebroadcast mechanism !topic philosophy meeting Added topic: philosophy meeting (by ash) We have a lot of those bg tasks tbh lets say my node was not connected to any other nodes when I broadcasted so network is not aware of it we could make an rpc call to rebroadcast our pending txs Yeah so how do you think we should tackle this? if not with a background task? The garbage collection should probably be happening whenever something relevant is happening e.g. building a fork/block aha then we can do it after finalization, where we cleanup/reset the forks too is that garbage collection for miners or relayers? everyone does it, since they all keep current fork states ok Yep What about the tx ordering? ok then 1.
remove erroneous txs from forks mempool when building block 2. on forks reset, remove unreferenced pending txs from global mempool I think the correct way is to always go oldest-to-newest about rebroadcasting, you can actually check if a tx was propagated or not we are doing fifo now have you heard about tx radar? upgrayedd: ACK B1-66ER: yeah, but that can come later, not trivial right now all good here? move to next? yep it sounds all rational, and broadcaster or not is fine either way yeah we can tackle this at a future time, a simple rpc rebroadcast call should be enough !next Elapsed time: 13.2 min Current topic: sync checkpoints (by upgrayedd) ok so we want to add a checkpoints functionality in syncing, where you configure known checkpoints and node will first sync until that, then go to "uncharted waters", aka actual tips B1-66ER: the question is: should we use a checkpoints array, or just a single checkpoint? array since the array would be a sequence of hashes, using just the last one ain't enough? two arrays can't produce the same last do you sync by headers first then verifying txs? B1-66ER: Can you maybe explain how checkpoints in libbitcoin work? or is it just verifying each block one by one? yeah, we grab headers backwards, and then grab the blocks going forward ah then maybe a single value works fine the array is when you do one by one because you want to fail earlier rather than later but headers sync is fast right now the logic is: ask peers for their tip, grab the most common and highest one grab the headers from tip until your last known, going backwards once all headers are received, do a quick and dirty verification of the sequence [last, tip] if that passes, start grabbing the corresponding blocks going forward, and apply them this is without the checkpoint with a checkpoint, we introduce a step before this process, where we ask peers not for tip, but directly for the headers [..checkpoint.height]
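A toy sketch of the stale-tx garbage collection settled above: after forks are reset, drop any pending tx that no active fork still references (the types are stand-ins, not darkfid's real ones):

```rust
use std::collections::{HashMap, HashSet};

type TxHash = [u8; 32];

struct Fork {
    mempool: Vec<TxHash>, // txs this fork still references
}

/// Run after finalization, when forks are cleaned up/reset.
fn gc_pending(pending: &mut HashMap<TxHash, Vec<u8>>, forks: &[Fork]) {
    let referenced: HashSet<TxHash> =
        forks.iter().flat_map(|f| f.mempool.iter().copied()).collect();
    // Anything no fork references anymore is stale: drop it.
    pending.retain(|hash, _| referenced.contains(hash));
}
```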
do the easiest one The checkpoint is used to speed up syncing lol fair answer which is simpler, so keep current logic and just have checkpoint like an assert Meaning you skip verification of stuff up to the checkpoint brawndo: hm I wouldn't skip anything tbh use the checkpoint just to verify we are syncing the correct sequence you can skip verifying blocks before checkpoint ok sure can be done Yeah you just execute the state transitions, not care about sigs/proofs ok will do btw B1-66ER how do I drop a p2p peer from the app logic? call .stop() or .ban() for example in the scenario we can drop the peers that are not following the checkpoint To answer: Then you skip the peers who don't have the checkpoint aha ok After that, find the most common tip, and follow that brawndo: i think we can assume peers generally have the checkpoint Perhaps we still need to verify received block correctness tho no? it will simplify things yes ofc from the checkpoint I mean someone can inject erroneous blocks if we don't verify them talking blocks before checkpoint but we just make sure the checkpoint actually exists yeah and when we retrieve blocks until that checkpoint, we said to skip verifying them but can't a peer inject an erroneous block in that case? You're verifying the block headers yeah the chain must be valid but you cannot compute hashes backwards when verifying headers, we verify the sequence correctness so if prev_block_hash == hash(prev_block_header) we still have to verify the block itself afterwards then it's in the chain ^ brawndo is saying to skip the block, just make sure the hash chain is valid yeah we already do that header.previous == previous.hash() && header.height == previous.height + 1 if this passes, then the checkpoint ensures this block is in the checkpointed chain and therefore is valid and since hash contains txs merkle tree root, when receiving the block, we can just do: block.hash() == header.hash() and its enough Yeah correct ok gg just making sure we check everything With the checkpoints you get to skip all signature and zk proof verification Which likely introduces a solid speedup yy it will syncing right now became a lot slower, since we append everything as a proposal, aka going the full hardcore verification route ok these are the two last pieces for darkfid, along a general cleanup wherever it's needed then it will be good to rock n roll it will be an issue for sure (sync speed) nice testnet imminent <3 looking forward to it !next Elapsed time: 21.2 min Current topic: tutorial document peer review (by ash) Hey fellows! Hi greetz I did a document of the anon credential tutorial I would like it if someone could do a peer review (new here) please where is the PR? No PR just a document, agreed to work on a document first then move on to the code agreed with whom? agreed with hanje, but unfortunately I haven't found her/him hanje-san is a girls name i think hanje meant to make a PR for the book You can make an anonymous account on codeberg: https://codeberg.org/darkrenaissance/darkfi/ Then if you open a pull request, someone will review it :) Good! I will do that ty next? next ash: hit !next if that's it for this topic !next Elapsed time: 6.0 min Current topic: philosophy meeting (by ash) Well I was thinking that it would be cool to do philosophy meetings maybe once or twice a month, not too much what's the format of the meetings?
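Pulling the sync-checkpoint rules above together, a self-contained toy sketch where DefaultHasher stands in for the real block-header hash and all types are made up:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Hash)]
struct Header {
    height: u32,
    previous: u64, // hash of the previous header
}

impl Header {
    fn hash_id(&self) -> u64 {
        let mut h = DefaultHasher::new();
        self.hash(&mut h);
        h.finish()
    }
}

/// Quick-and-dirty sequence check plus the checkpoint assert:
/// header.previous == previous.hash() && header.height == previous.height + 1.
fn verify_sequence(headers: &[Header], cp_height: u32, cp_hash: u64) -> bool {
    for w in headers.windows(2) {
        if w[1].previous != w[0].hash_id() || w[1].height != w[0].height + 1 {
            return false;
        }
    }
    // If the checkpointed header sits in a valid hash chain, everything
    // below it is pinned by it, so sigs/proofs there can be skipped.
    headers.iter().any(|h| h.height == cp_height && h.hash_id() == cp_hash)
}
```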
maybe read a small text or a video then discuss it in turns i'm up for that sure i'll join I found an interesting text just a tentative starting point https://www.researchgate.net/publication/290120192_Free_software_philosophy_and_open_source we can do the meetings in #philosophy I'm in for that About the philosophy of free software I'm also keen cool! Nice What do you think about the text? Or if you can recommend else? let's go with that as a first one we can propose subsequent texts at the meeting can someone link the pdf? i can't find it on sci-hub that would be great and can also propose the schedule? maybe wednesday at this same time can we do it an hour earlier maybe :> wfm maybe next week or two more? any is fine just pick one imo Next week, wednesday, an hour earlier than this meeting? ++ wfm too 1st May 14.00 UTC ++ ++ excellent! ty <3 !next Elapsed time: 28563292.3 min No further topics !stop !end Elapsed time: 0.2 min Meeting ended link the pdf plz * hi Good mtg tnx all as B1-66ER is ignoring my dms, I want to say a few words here ty all, cya Thanks all yes, a sec thanks everyone I highly respect everyone's skills, dedication, knowledge and work o/ loopr: im way too exhausted to deal with people, i'm not a manager I know Hear me out I realize my skills were not up to the task i was working on, i misjudged what it required That's fine, i can take that What i think is not fine is the way my work has been trashed, and close to ridiculed I knew people here were no snowflakes, and heck I have been around in many projects and environments, i am no 20 year old freshman from uni But the language and vibes border on outright hostility here if people are perceived to not be at the same level as others - at least this was my experience I don't think that is beneficial to the project, especially prospective newcomers nobody disrespects you personally I was just going to say that I know I respect and honor that too But I want to close saying that I did not fail You failed B1-66ER, because you did not do due diligence on my skills beforehand, and then were bashing my work for not fulfilling your expectations 20 years senior dev my ass Now this is personal ok well i chose to trust what you say about your skills Yes you did, and that is the point i guess trusting people is a bad quality You wanted to help someone as a friend, rather than onboarding like in a conventional way pfff we don't do calls and bullshit like that, either step up or step out. we're not here to manage soydevs I am confirming your words, not saying you should have had to I stepped in and now i guess stepping out I solely feel you can do better than bashing like that on someone's work . And disrespect someone's experience, because it doesn't match your expectations and the one you need . Still, thanks for the opportunity and good luck everyone : @skoupidi pushed 1 commit to master: ee2859554a: darkfid: optional checkpoint usage during syncing added yo yolo how's it going, evenzero? hi, been a while since i used irc actually \list anyone there? e0 yes baitin? 1) What i would like to contribute, have worked on web3/zk before but am a bit clueless here, help?
e0: https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html#contributing thanks upgraydd for that \nick kamacho e0: its the other slash / yeah been a while since i used irc, sry gm upgrayedd: batin is from idiocracy lol draoi: libp2p is over-engineered hot garbage so you can look at their requirements/featureset and tone it down re: resource manager good to get informed / widen the perspective, then try to come up with MVP we can always do less now, more later, esp if the arch is good such that it allows expanding to that in the future without huge destructive changes (so good to keep in mind the future when designing software now) also this software is solid: https://www.rasterbar.com/products/libtorrent/ it runs all the torrent apps, so whatever they do for DoS is good. take a look at that libp2p is made by , whereas libtorrent is real world software perfected over 2 decades libtorrent -> libbitcoin -> darkfi p2p, so our p2p stack comes from libtorrent checkout the code and dig through it: https://github.com/arvidn/libtorrent it's an interesting history lesson on pre-async net code stuff lol everything is done using callbacks rg snub src/ rg choke src/ did you read the notes i made on libp2p? not sure if you're responding to that or just making a general comment ah not yet, ty general comment in arch/p2p-network.md https://darkrenaissance.github.io/darkfi/arch/p2p-network.html#scoring-subsystem yep that's it so basically it splits the different processes into scopes that contain each other (like node -> session -> protocol -> channel in darkfi terms) where each scope has a corresponding limit that is calculated from memory usage and other resources > Since the reader in messages.rs preallocs buffers, there should be a hard limit here, and the raw reads also affect your score too. tell me why this is wrong i didn't write that lol ah true i did but go on, tell me why well we changed the code now so we are no longer preallocating buffers correct nice so you have a model here for scoring, i'd still look at libtorrent which does snubbing/choking/..., and see which strategies you might employ apart from stopping the connection or banning the host ++ also this scoring system is very fine grained, we could probably get away with less also libp2p just gives an error when a limit is reached, and its up to other parts of the stack to decide what to do with that error for example file descriptors - we don't really have any except the peers list it was saying that socket addrs are file descriptors on most machines and it's a scarce resource that must be constrained... idk rly ah ok, yeah they are file descs that's the connection limit you don't need to meter that in scoring subsystem, you could probably just use a hard limit Anyone running darkfid? Or is testnet not online? gm i think it's coming v soon (see yesterday's meeting) the final testnet tnx : @zero pushed 1 commit to master: e73aa009fa: doc: wallet functionality : @skoupidi pushed 2 commits to master: c208ba4442: validator: better erroneous and unreferenced penging txs handling : @skoupidi pushed 2 commits to master: ad7f835d50: darkfid: pending txs garbage collection added Trying to add user specified constants to zkas, but kinda stuck with how I should fill up the constant_map (parser.rs:327) cont... So, first -> Either decide at runtime which NextTupleX to use. So defining a NextTuple9 if name.token == MY_OWN_CONSTANT is present. This would probably be clear for someone working on the code about what is happening.
s/Either// s/327/324 second -> keep NextTuple3 in the while statement (parser.rs:327) and check if name.token == MY_OWN_CONSTANT and then do a NextTuple5 so that constant_inner reaches the separator comma at the end of our statement in zk file, and define reinitialize comma in parser.rs (from parser.rs:327) to this new comma from NextTuple5. This one does make sense but then while runs twice, I don't really like it that much. third -> in #252, they check if comma.token_type == TokenType::Assign and then proceed accordingly. This also works but still gotta follow what second approach does. Does make sense but would have to make it clear why this is being done. Done. Not sure which one to go with (if first then help lol) going from second to first shouldn't be that troublesome I suppose. Gonna try second approach : @skoupidi pushed 1 commit to master: a5b9706829: darkfid: properly handle the garbage collection task Hello I am a senior software dev, but my cryptography skills stop at knowing what a hash function is and how to use it :) elloo Can you guys suggest some online course or book for improving my cryptography skills? Hi airpods69 I don't want to become a deep crypto-math-expert either mysdcdr: The learn section would be a good place to start looking for stuff to learn: https://darkrenaissance.github.io/darkfi/dev/learn.html#methodology Actually I saw that page, and that is the reason I decided to drop by here For 'Software Developer' it doesn't mention much in terms of cryptography I don't think my goal is 'Cryptography Researcher' Maybe 'Protocol Engineer' 'Algebra studies are also required but not to the same degree as cryptography.' So what would be an appropriate course of study? Hi frens Whatsup johrz mysdcdr: well, I'd take a step back from the convo and wait for those with more experience to drop in. (my 2 cents would be that even for software developer, you gotta know cryptography to some extent of course to understand what you are doing. Protocol Engineer would need more than that, Cryptography Researcher of course needs the most.) Cool, appreciate you wanting to help though! I'll be lurking around and see if something comes up. sure thingg gm hey mysdcdr, you want to learn crypto? most people try to study books on crypto, or more specifically zk schemes. a requirement is to know some basic abstract algebra (see pinter for a good intro) then you can try to study zk schemes starting with groth16 or plonk. after this, you can go deeper into math (starting with EC), or you can treat math as a blackbox and just study a lot of different crypto schemes to learn little tricks you can use the justin thaler zk book is decent gm philosoraptor o/ gm maybe this was shared already: https://qed.usc.edu/papers/ChowGM08.pdf it should be in the book this blog is really good https://blog.libtorrent.org/ https://www.rasterbar.com/products/libtorrent/utp.html i just used unsafe https://github.com/narodnik/therapy/commit/8dcb6dbc94ef79f2c115a5c94ebae6e6bbbb1847 : @draoi pushed 1 commit to master: ea9044216d: doc: add useful attack definition to arch/p2p-network.md...
https://coredumped.dev/2022/04/11/implementing-a-safe-garbage-collector-in-rust/ q sorry hah misclick vim motions seem to be a bother sometimes xD gm gm brawndo airpods69: You'd have to modify the logic a bit there where the NextTuple3 is happening for constants Since you'll have two different ways of declaring constants, you'll somehow need to scan for either This is what I did: https://codeberg.org/airpods69/darkfi/src/commit/c0aecb26e77604d39a02c73e332188ea7c5ba39b/src/zkas/parser.rs#L331-L369 seems like it works (like at least reads the constants, just need to figure out how to put it into parse_ast_constants function) function at parser.rs:590 ok but it'll have to be an arbitrary name, not MY_OWN_CONSTANT Ah okay so maybe check if the name is not from the available ones? Yes that, and you see each of those in 590 check that it exists as a hardcoded one So there you skip that check if it's a constant declaration, e.g.: EcFixedPoint FOO_BAR = (0x00, 0x01), I think then also the `Constant` enum should have a `coords` field that is an Option err, I meant `Constant` struct in ast.rs ;q back, sorry was afk: had to deal with internship meeting I haven't reached the part where I had to look at `Constant` enum (Was dead stuck with the part I sent you cause I couldn't decide how to proceed until yesterday). `constants_map` would need to take in `x` and `y` with the arbitrary name but `parser_ast_constants` doesn't like that. Would have to change the definition of the function too. To put the variables into `Constant` enum, gotta go through `parser_ast_constants` first ;-; Yeah you have to modify it okii shall do !topic commit log format specifier Added topic: commit log format specifier (by philosoraptor) philosoraptor: That looks like a structured plan, thanks np You would still say that is the plan for someone who doesn't want to end up being a Crypto researcher, just a solid grasp of crypto as a dev? mysdcdr, I suppose yeah even if you don't want to be the researcher (just treat the maths as blackbox like philoso said) mysdcdr: yeah but do less abstract algebra and as airpods said, just treat it like the blackbox like an API : @skoupidi pushed 1 commit to master: ca5df82a72: darkfid: apply blocks with minimal verifications when using sync checkpoint so I am able to get the values of coordinates to reach compiler.rs:85, but as soon as I push the values to bincode, decode starts freaking out :/ raising a draft PR in 5 mins just to keep things visible Alrighty philosoraptor airpods69 let's see where I get to :) Thank you so I raised a draft PR last night, #253 is the PR number. (Apparently the constants also get written with the output, just gotta figure out decoder.rs now) mysdcdr: looking forward to your learnings :D gm oh this makes sense, it freaks out cause it doesn't identify '0' as a VarType (I am putting 0 as a NoneType Token for the coordinates, can be changed later) but hmm idk lets see what can be done. I am guessing this is related to how compiler.rs puts the value into bincode instead of decoder.rs itself. or not... ;-; decoder.rs:132 is where the problem starts (more like the problem that I created lol) gm o/ o/ (0, 0) is not a valid EC point. you could use that yay, that hunch was right. Though still stuck with decoder.rs part, so close yet so far. adding 3 to iter_offset at decoder.rs:147 would make sense since we have 2 extra values in between? (it does compile the zk binary) shouldn't it be 2 * length_of_value?
and the length_of_value is 32 ahh for now the coordinates are TokenType::Number and iter_offset was `iter_offset += 1` and I changed that to 3 (cause 2 new values) and things started to compile. 2 * length_of_value would probably overshoot the iter_offset? also Number because its easier to deal with it before adding the hex values. : @draoi pushed 1 commit to master: f60a1983bc: doc: add darkfi p2p resource manager notes to arch/p2p-network.md ah nvm i didn't realize you're working with tokens, i thought it meant bytes im working on the UI rn https://armor.vision/ui/1/ mysdcdr: in #math, feel free to share your study plan philosoraptor, no problemo. oh wait, whats that UI for? I thought we would just use weechat or some other irc client as client for darkirc we'll keep weechat compat, but making a UI for everyone else oh wait so no compiling? just download and run? darkfi.exe yeah lets see, it's tricky since we have the daemons like darkirc and darkfid ah fair I remember someone talking about this on the telegram chat a day or two ago. we could compile them in or keep them separate i'm not sure what's the correct approach or like you can optionally "attach" them to the UI if you want to enable payments etc idk option 1) node is compiled inside the UI, option 2) node is external and UI uses jsonrpc you can actually do both as well altho you don't want to have tons of binaries like darkirc, darkfid, taud, .etc managed by your UI, which is why me and bra*ndo discussed making something like a 'kernel' which has pluggable modules. that way there's a single binary managing all the backend daemons. you can also do in process jsonrpc, so even if the UI runs the daemon code directly, it can just use jsonrpc... or talk to an external process for example on the phone, you might want darkfid on another server since it's heavyweight but maybe run darkirc locally is fine that sounds good re: kernel darkfiOS :D apple devs hate this one OS a lot of code/mechanisms are shared by the individual binaries. for example when we add swarming support to p2p network you will want to run a single p2p overlay network shared by all apps gm philosoraptor: The UI should definitely be a separate thing that uses a protocol to communicate with the software it's supposed to work with You can have another "main" daemon that can handle the logic, but the UI should be an interactive thing only display/interface That "main" daemon for example could be the wallet/ircd Then any interface can interface with it using a protocol like JSONRPC For the chat, you could simply have a JSONRPC subscription which multiplexes - it would notify you about new messages, and you would send your messages through that same connection i remember using mldonkey-gui, which had the same model with mlnet daemon, where the daemon implemented kazaa, ed2k, ... a whole bunch of programs it was a nightmare to manage. many times i installed mldonkey-gui and it couldn't find the mlnet daemon, or it would be running but not able to connect .etc then also when closing the app and reopening, sometimes it would think mlnet is already running, ...
lots of weird stuff like that Could just be poorly written, I dunno now i see bitcoin-qt, monero gui, qbittorrent, instead of using jsonrpc with the daemon, they just integrate it directly as a lib idk, yeah could be poorly written Yeah I dunno what qbittorrent does I know that it has a Qt UI, and also a WebUI it should be possible to start the daemon in a special task group, and then manage the group qbittorrent is libtorrent-rasterbar + Qt UI aha I use this https://pypi.org/project/python-qbittorrent/ or will we host a darkfid daemon? like electrum or cake wallet liability i guess on mobile you will connect to a remote darkfid, but maybe do some stuff locally liability++ so it's a mix of remote/local daemon Maybe in the P2P network we can export some public RPC endpoints Because darkfid doesn't maintain a wallet So it's relatively safe to expose Ideally we don't host anything ourself Dunno if seed nodes are also facilitating usage s,also,& considered, ok good we're thinking about this, but for now i'll assume jsonrpc single endpoint mode It's always single endpoint In P2P you'd just select a random node to work with you should select multiple and cross correlate what they send you No, why? meh because they can lie about what's canonical Same goes for Electrum, yet it seems to be working yeah i'd argue that's an attack on electrum electrum should allow connecting to 3 nodes and comparing traffic It's still not a solution, it's just lowering the chances of a dishonest node lowering chances of a dishonest node is good only full safety is running your own node, but if people won't do that, then provide next tier protection commercial wallets run their own node and users trust them Yeah with community servers, then it's more risk anyway i have more idea of the direction, will keep things simple for now *nod* : echo echo back ohai : Hi : darkirc seems stable : echo echo back : hey : draoi: didn't taud recently lose some tasks? : yeah taud was not very stable last i was using it : i think that's event graph related but idk : needs someone to look into : it should be easy to fix if we have the proper tooling : then when it doesn't sync, we can simply isolate the graph and try to replay the event to see if it gets added : if it does then it's most likely a protocol issue fun fact about dbus, i normally disable it but enabled it for therapy. now it starts elogind which captures my Sleep button (which i use for other stuff) great example of bad engineering airpods69: around? I am now upgrayedd airpods69: re #253: commit messages should be useful, as per https://darkrenaissance.github.io/darkfi/dev/dev.html#making-life-easy-for-others preferably squashed into a single one another thing is that you are signing your commits, which is probably revealing your main account. I don't know if thats a problem for you, but if it is, use a repo specific signing key. : @draoi pushed 3 commits to p2p_hardening: 657e74604c: channel: add start_time to ChannelInfo : @draoi pushed 3 commits to p2p_hardening: 823b329b0c: doc: update research manager notes in arch/p2p-network.md : @draoi pushed 3 commits to p2p_hardening: 98fb01d1ba: net: add initial `economy.rs`/ resource management sketch upgrayedd: oh okay, shall do it according to that. I somehow missed that when I was reading the documentation so I shouldn't also make microcommits? (not sure cause then the same intent would be across multiple smaller commits)
batch stuff that touch same places and/or part of same functionality addition into single commits okay, shall do. also you should avoid random prints you use for debugging in the final committed code. airpods69: It's also fine to push random commits in a PR just to keep a log of the work Later you can always do a git rebase and squash certain commits together so they make sense upgrayedd: yes the random prints would be removed in the final PR (This one is just a draft, so its like if someone runs it then I want them to see what I am seeing here). yeah this brawndo advice is good also, was mainly talking for the "final push" ah yes, I'll squash it in the final push and make a descriptive PR. right now its just like checkpoints to go back and forth and keep up with what I am thinking. (and also how the code comes forward for whoever is keeping up with the PR) bbl in a few hrs airpods69: yeah thats understandable and makes sense, the point was so you just start making it a habit to write proper commits yess, that I will do from the next commit. even when microcommiting, its much better to have a good commit message, so its clear right away what it touched, without needing to look at the code then squashing becomes even easier, since you simply mash up all these into a single one, so you can take the commit messages as the changes in the final squashed commit make your future self work easier, he/she/they/them will appreciate it :D : hello from darkirc : i'll be running darkirc tor nodes only for the time being, these two are online and you can add them to your configs if you want: : tor://cbfeuw6f4djcukyf3mizdna3oki6xd4amygprzdv4toa3gyzjiw6zfad.onion:25554 : tor://6pllu3rduxklujabdwln32ilvxthvddx75qg5ifq2xctqkz33afhczyd.onion:25551 : <3 yes upgrayedd, I will follow the advice :D back : nice errorist welcome back gm o/ hows it going? sometimes i spend ages on trivial questions, but rn think i have an answer meanwhile sidetracked hacking on our whiteboard app dbus is such a crap tech : gm philosoraptor: any thoughts on this? https://codeberg.org/darkrenaissance/darkfi/src/branch/p2p_hardening/src/net/economy.rs it's just a sketch rn and the interface will likely change as i start integrating it and i will probs rename to resources.rs rather than economy.rs yeah it's a good start, call it Resource, not Resources ++ : gm philosoraptor: Nice that you hate dbus from actual experience with it could've been done well, a system bus would be cool maybe we're meant to use proc/fuse for that : @draoi pushed 1 commit to p2p_hardening: 6cfbd76f80: net: integrate resource API (dummy values for now) plan9 solves this brawndo: i made it fully p2p now and using zmq https://github.com/narodnik/therapy fun fact, in zmq you can have multiple publishers -> single subscriber model, just by doing the bind on the sub socket instead of pub socket so the app just accepts draw commands like that, and then the python scripts can push directly to your canvas https://github.com/narodnik/therapy/blob/master/pencil.py see PEERS Sweet yoo !list Topics: 1. commit log format specifier (by philosoraptor) ^ in case i'm not here, this is that we have a pre commit hook which enforces a particular format for commit messages that way we can generate a ChangeLog directly from the commit log gm : hey errorist are the above nodes seeds or peers?
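A tiny sketch of that inverted pub/sub wiring with the rust-zmq bindings: the single subscriber binds and every publisher connects (endpoint made up; the sleep papers over zmq's slow-joiner handshake, which a throwaway demo needs but real code would handle properly):

```rust
use std::{thread, time::Duration};

fn main() -> Result<(), zmq::Error> {
    let ctx = zmq::Context::new();

    // The canvas: one SUB socket that *binds*.
    let sub = ctx.socket(zmq::SUB)?;
    sub.bind("tcp://127.0.0.1:5555")?;
    sub.set_subscribe(b"")?; // accept every topic

    // A tool script: one of many PUBs that *connect*.
    let publ = ctx.socket(zmq::PUB)?;
    publ.connect("tcp://127.0.0.1:5555")?;
    thread::sleep(Duration::from_millis(100)); // let the connection settle

    publ.send("draw_line 0 0 1 1", 0)?;
    println!("{:?}", sub.recv_string(0)?);
    Ok(())
}
```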
: ok seem to work as seeds map 1949917 btr 1915595 hashmap vs btreemap speed comparison for 6 keys ACTION drumrolls vec 1190192 i just increased the sample size to 100k: btr faster than map vec faster than map vec 114206946 map 215887664 btr 206030136 so vec actually makes sense for low number of values https://agorism.dev/uploads/main.rs 100k keys? no the same size, see the code *sample size ah ty, been meaning to check this for hostlist max sizes (1k, 2k, 5k, respectively) i think for large sizes, hashmap becomes more efficient philosoraptor: quick q is it fine if I change every MerkleTree::new(100) to MerkleTree::new(1) for consistency? the provided usize is the checkpoints size, which we don't really use, so 1 is enough everywhere yes i just copied what brawndo did lol yy no worries will do a quick find/replace everywhere izi pizi copy pasta error well its not an error :D true s/error/mistake : @skoupidi pushed 3 commits to master: 80044e306f: darkfid: fully configurable fees verification : @skoupidi pushed 3 commits to master: 9178923c9e: darkfid: gracefully handle everything in live loops : @skoupidi pushed 3 commits to master: 080417bb3f: chore: replaced all MerkleTree::new(100) with MerkleTree::new(1) for consistency rust borrow checker driving me crazy #safety_first https://agorism.dev/uploads/main.rs i cannot do this in rust ffs what's the point of a borrow checker if i have to use RefCell everywhere gm greetz yeah I think async rust is a bit notorious for being annoying i spent all morning trying to make a tree that takes 10 mins in every other lang realized it can only be done with Arc, so instead moved data out of graph structure, created an allocator and use usize indexes now it won't let me make a function to link 2 nodes because you can only have 1 mutable ref, even though you can do it completely inline without the function wtf sounds annoying I wonder if other languages are opening up a big possibility for race conditions or other memory issues in that case or if rust needs better ergonomics in matters like this so often the solution in rust is to just use indexes like usize but then you just recreated a pointer hm and it looks like RefCell uses unsafe code under its API too so you kind of end up in the same place lol this is the solution https://users.rust-lang.org/t/multiple-mutable-references-from-vector-content-how/90132/3 unbelievable : @draoi pushed 1 commit to p2p_hardening: 753b3a8160: net: apply arbitrary limit on packet size what if I get the coords as a String format and then played with it to the needed format later after decoder.rs does its job? (in reference to #253) I mean, it is a dirty fix and doesn't really uphold to the flow (it should be a number token), but it should work, at least in my mind. rationale behind this is that deserialize_parallel was working previously without x and y added to bincode array/stack and it was a string so in theory if I just went with the flow of string, so iter_offset += 1 and += offset after deserializing, it should work.
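The forum thread linked above boils down to split_at_mut: a minimal sketch of linking two arena-indexed nodes without RefCell or unsafe (Node is a stand-in type):

```rust
struct Node {
    links: Vec<usize>, // indexes into the arena, i.e. the "pointers"
}

fn link(nodes: &mut [Node], a: usize, b: usize) {
    assert!(a != b);
    let (lo, hi) = (a.min(b), a.max(b));
    // Split the slice so each half owns exactly one of the two nodes,
    // giving us two disjoint &mut at once.
    let (left, right) = nodes.split_at_mut(hi);
    left[lo].links.push(hi);
    right[0].links.push(lo);
}
```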
(decoder.rs:149, or somewhere near it, I added some code so line number might vary) 1.5 hours later, It does work nicee, https://imgur.com/a/pUmPaMx decoder.rs decodes the values just fine, gotta put it into the Zkbinary that gets printed now then I can finally move onto actually using the constants well, that was fairly easy lol https://imgur.com/a/j6QYp3I gm http://0x0.st/XHxv.txt unable to compile v0.4.1 on aarch64 gm topia weird that's a rust internal error problem with this crate https://docs.rs/digest/latest/digest/ sometimes these problems go away and are because there's a new compiler version, and the library authors didnt update yet .etc you could try downgrading compiler or upgrading to nightly : gm : hey airpods69: use a [u8; 32] bytearray. There's a method to_repr()/from_repr() oh okay, for the zkbinary right? (yeah Im stupid, my bad) yeah instead of string yep, hopefully shouldn't be a problem to do that. Also question, once this is done, how do we test that generator is working with the provided coords in vm.rs? Not so sure. just add a print to make sure it's the correct value being loaded draoi: can we rename Subscriber to Publisher? darkfi/src/system/subscriber.rs:61 ah fair then, shall make a PR for the whole thing by tonight then wonderful sure test test back hi I've been having issues trying to implement the TryFrom trait into zkas/parser.rs for VarType, here's the code I've implemented (only the first part I'm getting errors on) https://pastebin.com/7iUSP2j5 this is the error: https://pastebin.com/VGXkNDtc keeps on saying expected 1 generic argument for Result any ideas? I've tried String, but no go it follows the Rust by example code, so unsure why it's having issues ah wait, if I remove Self::Error, the error goes away, does that make sense? trying this solution does away with the error, but I get another "expected 'String', found 'std::io::Error'" https://stackoverflow.com/questions/57794849/result-getting-unexpected-type-argument
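For context, "expected 1 generic argument" fits a single-generic Result alias shadowing std's two-generic one. A self-contained sketch of the impl being attempted, with assumed variants and a plain String error (the thread below lands on the crate's own error types instead):

```rust
use std::convert::TryFrom;

#[derive(Debug)]
enum VarType {
    Base,
    Scalar,
    EcPoint,
}

impl TryFrom<&str> for VarType {
    type Error = String;

    // Fully qualify std's two-generic Result so a single-generic
    // `Result<T>` alias in scope can't shadow it.
    fn try_from(value: &str) -> std::result::Result<Self, Self::Error> {
        match value {
            "Base" => Ok(Self::Base),
            "Scalar" => Ok(Self::Scalar),
            "EcPoint" => Ok(Self::EcPoint),
            _ => Err(format!("unknown var type: {value}")),
        }
    }
}
```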
(maybe could translate it from String to [u8 ; 32] in vm.rs) ah my bad airpods69, no you're correct, not me the binary should be [u8; 32] not string tho String is serialized as length (using VarInt) + byte array, whereas here we just want 32 bytes, no length brawndo: https://gagor.pro/2024/02/how-i-stopped-worrying-and-loved-makefiles/ phiosoraptor: "the binary should be [u8 ; 32] not string" do you mean the coords constant type here? also yeah, maybe some function to make the string only work if data in string is like that of 32 bytes? (not sure) i mean what goes into the binary file is 65 bytes, 1 byte (either 0 or 1) and an optional (if the byte is 1): 32 bytes for x, 32 bytes for y well either 1 byte or 65 bytes nothing more or less afk (offline), cya l8r seems doable, it would be a check for 64 bytes (x, y) in parser.rs which gets triggered if we have an arbitrary constant name. alright, have a good one. upgrayedd: you said to use the darkfi error 'type Err = Error' is that all it is? deko: I don't understand the question s,deko,deki in response to my code from before: https://pastebin.com/gYmEiEVw you said to use the darkfi error type 'type Err = Error' instead of what I had? Just want to confirm if that's the only thing I need to use for declaring the error type? well if its not imported already you need to also import it also don't use std::result::Result, crates Result sould be fine the definition should be fn tr_from(value: &str) -> Result { ... } also don't use VarType::{type}, use Self::{type}, for a cleaner look ah I see, thanks in error bracket(_) don't use format you should be able to use Err(Error::ParseFailed(value)) or some other similar error from src/error.rs okay, so all the error related code is in src/error.rs then? I was looking in zkas/error.rs zkas/error.rs is zkas specific errors, src/error.rs is general darkfi lib errors I see where is the todo you are doing? zkas/parser.rs line 686 ok so you shouldn't use darkfi result error should I just leave it as type Error = String then? no it should be ErrorEmitter btw your impl is wrong you know that right? well yeah, that's the issue I've been having with the line fn try_from(value: &str) -> Result the todo is not for a FromStr or is it wrong elsewhere? sec thinking someone suggested the task as a way to learn more about Rust, and from memory I think they said VarType could have a method from a str yeah so you replace the match with something like: let typ = v.1.toke.as_str().into()?; ret.push(Witness{name: k.to_string(), typ, line: v.0.line, column: v.0.column}); so your TryFrom impl should match exactly the match therefore since the match uses Err(self.error.abort(..)) your tryform should use that so Error = ErrorEmitter or just ErrorKind::Other should be much simpler test test back okay, thanks for the feedback think I get it. I'll re-try that, also need to reset my VM because it's acting up will bbl glhf b, checking logs welcome back o/ > seems doable, it would be a check for 64 bytes (x, y) in parser.rs which gets triggered if we have an arbitrary constant name. ah yeah this is a better idea actually than the bool philosoraptor, thats what I am going to do. First step, get vm.rs and fixed_bases.rs working with the constants. The checks can wait since those are checks and won't really bother the functioning but only make sure things work. noice bruh ;) ohh also also, I'd need help with this check. 
https://codeberg.org/darkrenaissance/darkfi/src/commit/27feb7b4446c7471c5f378c8f64e9a7b45cacdc7/src/zkas/parser.rs#L596 (it was the same as:D) s/:D/"witness check" in the match case at vm.rs:655, if we have arbitrary constant names, then it should actually replace the error for invalid constant name (since that would be handled in parser.rs while defining. If not valid then assume arbitrary constant name and then check for x and y accordingly? if failed then zkas will start screaming about it instead of this reaching till vm.rs) guess I'll replace it yep, whats the worse that can happen 1 paul: o/ lol, hi I'm a noob to irc too much time on discord, etc same, dont worry bout it xd I saw a post on reddit looking for devs (https://www.reddit.com/r/rust/comments/1bpg8b8/official_rrust_whos_hiring_thread_for_jobseekers/kylb7de/). I might be needing a new role in bit so I figured I'd sit in on the meeting tomorrow. gm gm o/ another day I'm glad this project isn't on discord me too although I still need to setup an always on machine instead of this laptop cause I haven't switched to windows to play some games for ages now. I would have appreciated some centralized stuff XD /s lol yeah same, I've been running of VMs but you can get it on an android phone yeah, gonna set it up once I'm done with this PR. gm gm gm : gm, just got back, tried to build new dockers, nice work ! : {x86_64,aarch64}_{almalinux,fedora,ubuntu,debian,rocky,oraclelinux}_2024-04-27_080417bb3 : arm0427.080417bb: does it work? : i didn't change anything lol !list Topics: 1. commit log format specifier (by philosoraptor) gm gm sir gm gm !topic darkirc migration Added topic: darkirc migration (by brawndo) draoi: Is lilith supposed to be doing this? >>Whitelist is empty! Cannot start refinery process basically means it does not have knowledge of any peers it's valid behavior when the whitelist is empty, however ideally on a healthy network lilith should have a hostlist consisting of whitelist peers etc it could just be bc the network is small rn maybe a warning is too harsh here, should just be a debug msg or info !list Topics: 1. commit log format specifier (by philosoraptor) 2. darkirc migration (by brawndo) ok : @draoi pushed 1 commit to master: af72f67309: lilith: change log level from warning to debug for empty whitelist !list Topics: 1. commit log format specifier (by philosoraptor) 2. darkirc migration (by brawndo) upgrayedd: my latest code changes for zkas/parser.rs https://pastebin.com/FgWtqREP (I've removed some code from parser_ast_witness so it's shorter to read) make test no longer gives me syntax errors but I do get this: error: failed to run custom build command for `yeslogic-fontconfig-sys v3.2.0` is that something added recently? no, thats propably something to do with your rustv also why is this not a PR? 
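The fallback idea for the vm.rs:655 match could look roughly like this (a hypothetical shape with made-up names; the parser is assumed to have already validated the 64-byte (x, y) payload for arbitrary names, so a lookup miss here is a genuine error):

enum Constant {
    Builtin(&'static str),
    Arbitrary { x: [u8; 32], y: [u8; 32] },
}

// Known names resolve as before; anything else is assumed to be an
// arbitrary, parser-validated constant.
fn resolve_constant(
    name: &str,
    lookup_arbitrary: impl Fn(&str) -> Option<([u8; 32], [u8; 32])>,
) -> Result<Constant, String> {
    match name {
        "VALUE_COMMIT_VALUE" => Ok(Constant::Builtin("VALUE_COMMIT_VALUE")),
        "VALUE_COMMIT_RANDOM" => Ok(Constant::Builtin("VALUE_COMMIT_RANDOM")),
        other => lookup_arbitrary(other)
            .map(|(x, y)| Constant::Arbitrary { x, y })
            .ok_or_else(|| format!("invalid constant name: {other}")),
    }
}

fn main() {
    let table = |n: &str| (n == "MY_CONSTANT").then_some(([1u8; 32], [2u8; 32]));
    assert!(resolve_constant("MY_CONSTANT", &table).is_ok());
    assert!(resolve_constant("NOPE", &table).is_err());
}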
we should "review" over pastebins because I was getting compilation errors, so wanted to fix them first, wasn't sure if this yeslogic thing was part of that can do one now on codeberg deki: its fine to put code in a WIP draft PR I believe (atleast thats what I am doing, just follow the commit guide cause I wasn't doing that and upgra*yedd pointed that out) ok ty : @skoupidi pushed 1 commit to master: d9f1753381: contrib/localnet/darkfid-single-node: updated README.md with drk functionalities testing table !topic drk Added topic: drk (by upgrayedd) : @dasman pushed 1 commit to master: a822c89085: bin/deg: fix a bug in graph column that caused events in the same layer switch places leading to inaccurate graph : @skoupidi pushed 1 commit to master: e956ee71f2: drk: bincode rpc retrieval fixed, transfer tx generation fixed hey might suddenly drop out of the meet since waiting for an appt hey terry gm! ugh i realized that if we give the MessageDispatcher the ability to read from the stream directly, rather than going via packet, we need to implement a generic stream Reader on the dispatcher basically adding everywhere like to all the parents etc gets really messy yo gm o/ o/ !list Topics: 1. commit log format specifier (by philosoraptor) 2. darkirc migration (by brawndo) 3. drk (by upgrayedd) Hi holla hello should we start? hi gm draoi: just the methods not the struct !start Meeting started Topics: 1. commit log format specifier (by philosoraptor) 2. darkirc migration (by brawndo) 3. drk (by upgrayedd) Current topic: commit log format specifier (by philosoraptor) hmm tnx terry, will try that for commit msgs, can we add a pre commit hook to enforce a certain format then we gen changelog from commit log component: message +bugfix +othertag then next para is anything this way we can filter noise from commit log, see which ones are important I'd prefer if we follow the existing format we've carefully been using Adding tags in the first line of the commit message is awful ++ You can do it in the next lines of the commit message, e.g. sdk/crypto: foo bar Tags: bugfix, improvement last line could be good Follow the git standards of having such things in the commit messages Key: Value ok great Where is this going to be documented? So your hook could grep '^Tags: .*' i'll put a file in proj root with the list of components allowed in the prefix. then it must match exactly https://darkrenaissance.github.io/darkfi/dev/dev.html ^ we'll add it here ok next? (thumbs up emoji) !next Elapsed time: 6.1 min Current topic: darkirc migration (by brawndo) Should we migrate to darkirc and deprecate this one this week? I feel it's been quite stable there is one persistent issue, which is the lag on stop lag? ACTION also been doing infra cleanup so we will have everything in place my weechat rejoins several times on start afk ++ same https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html#mainnet-tasks "Currently closing DarkIRC with ctrl-c stalls in p2p.stop(). This should be fixed." yes i have the same issue wrt weechat rejoining draoi: I haven't observed that in darkfid, so maybe check if handling is different? yes someone should i will nice I haven't been having any issues Granted I'm running the node 24/7 it does seem mostly stable I experienced it couple of times, it's not persistent But yeah all in all we should migrate I'd be happy to try the migration, can always come back here. 
I don't have a daemon running all the time so would appreciate the chat replay feature okay aside: Is there an issue tracker? Tracking issues in an md book feels inefficient. yeah same Then will finish setting up nodes asap paul: bin/taud nice terry: will check https://darkrenaissance.github.io/darkfi/misc/tau.html yeah i will do some node deployments as well !next Elapsed time: 9.6 min Current topic: drk (by upgrayedd) dont want to hijack topic but working on this https://codeberg.org/darkrenaissance/darkwallet was fighting borrow checker last few days thats a gui right? : dasman added task (FQsM3c): handle darkirc tasks stop. assigned to dasman yeah its for gui well ain't it going to use drk as its "backend" ? use as in import, right? if drk is a library well you can call it directly like a system call but drk as a library is already suggested in a PR so it might make more sense Yeah have the lib provide the necessary RPC handling yeah its much easier if we could make a libified way to run programs like daemons as well rather than i have to write win/mac/linux/android process management lib and some way of setting/getting the settings so it can be configured in the menu I'm against a do it all app... anyway thats a future discussion, back to drk? it could even just use localhost rpc, but its running the daemon yeah sure so just q quick update: darkfid is pretty much finished, so now I will focus on drk functionality, verifying everything is supposed to work, adding fees to txs, testing txs, etc. etc. yay but some stuff is missing so will need some "help" doing them, like dao calls, token mint stuff Yeah terry could help you with the DAO impls happy to chip in where needed I can likely deal with token mint check contrib/localnet/darkfid-single-node/README.md im will be a bit slow tho it has all the info needed on what should be tested, and general status !topic event graph tooling Added topic: event graph tooling (by terry) !topic philosophy meeting Added topic: philosophy meeting (by ash) next? !next Elapsed time: 12.5 min Current topic: event graph tooling (by terry) we mentioned migrating to darkirc and also about taud, just noting that we need tooling here to export the event graph, and be able to replay tasks. currently we have the UI explorer though which is already a big step ok noted !next Elapsed time: 1.1 min Current topic: philosophy meeting (by ash) Hey! Just wanted to remember that: At 1st May 14:00 UTC (this wednesday) we will have a philosophy meeting where we will discuss a text related to free software. terry: it's already added, it's an open task: tau SjJ2OA ash: ok can we link the pdf? ah ty dasman Here is the link to the text: https://www.researchgate.net/publication/290120192_Free_software_philosophy_and_open_source with taud, i had some issues with events not being consistent across instances. i didn't look into it with the debugger tho is there a pdf or just the webpage? There is a button in the platform to download the pdf doesn't require log in or register draoi: could you please send me logs draoi: yeah exactly we need to be able to pinpoint and triangulate errors when they occur. whether they are net errors or event graph ++ ash: i tried doing it but it was asking me to register i don't have the logs, it was from a while ago, however reka may have them really? 
strange, a sec oh it works for me now (different device) will contact them, ty : @skoupidi pushed 1 commit to master: b97fc9ad83: contrib/localnet/darkfid-single-node/README.md: fixed table formatting terry: Did you downloaded? yep ty excelent good, see you this wednesday <3! !next Elapsed time: 4.2 min No further topics i think researchgate limits your number of downloads, and i had on my laptop already reached the limit !end Elapsed time: 0.2 min Meeting ended ty all, cya next week thanks everyone o/ o/ o/ \o o/ |o/ \o/ Thanks, good meeting ACTION back to janny ansible ACTION does it for free o/ : @dasman pushed 1 commit to master: dec82c1639: doc/tau: update tau clie naming ty janny, your service is noble Huh, what's janny or who is janny? Janitorial workers It's when you have to deal with deploying infrastructure :D Ohhh, got it Thank you senor for your service. lol How do you brawndo isnt senoretta brawndo, Thank you senorita for your service. lol terry here? terry, I don't know that. I am lazy and was trying to write less xD upgrayedd: i'm back blud terry all good chill ACTION sweating rushing to computer to serve m'lord whew wtf my isp's ipv6 just stopped working gm gm o/ brawndo, how's it going? 06:43:13 [ERROR] [P2P] Broadcasting message to tcp+tls://acab.accesscam.org:26661 failed: Channel stopped draoi: This happened on this same node and acab. is in external_addrs terry: git clone and cargo run to see the error https://github.com/lunar-mining/unsafe_trait/blob/main/src/main.rs lunar: The generic needs to be either in another trait, or defined on the trait itself thanks brawndo, any other info around the error? if it was in debug mode would appreciate you share the logs It wasn't in debug mode, I'll run so now I do see it being filtered yes so here you can see the changes required to make this compile: https://github.com/lunar-mining/unsafe_trait/commit/18d843a7ebb43ceaece6cabb48e16192d209ed87 https://termbin.com/ai3v Yeah like that so practically for our purposes this means adding a generic reader R to the MessageDispatcherInterface trait, the MessageSubsystem struct and all its parents (Channel etc) What do you want to achieve with it? basically rn we have the reader inside message.rs, and we read from the stream, convert to a Packet, send it to the MessageDispatcher which decodes it to a Message and dispatches it however terry was saying it would be better/ more efficient to just read from the stream to message directly, which means adding a reader R to MessageDispatcher hm Can perhaps `Packet` be modified in that manner? 
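What the compiler is complaining about in the unsafe_trait repro, in miniature: a generic method can't get a single vtable slot, so the trait stops being usable as dyn; the generic has to move to the trait itself, or the reader type has to be erased. A sync sketch with std::io::Read standing in for the async stream types:

// Not object-safe: a generic method would need one vtable entry per
// instantiation of R, which can't be known ahead of time:
//
//     trait BadDispatcher {
//         fn dispatch<R: std::io::Read>(&self, reader: &mut R);
//     }
//     // let _: Box<dyn BadDispatcher>;  // error: not object safe
//
// Erasing the reader type instead keeps a fixed signature:
use std::io::Read;

trait Dispatcher {
    fn dispatch(&self, reader: &mut dyn Read) -> std::io::Result<()>;
}

struct PingDispatcher;

impl Dispatcher for PingDispatcher {
    fn dispatch(&self, reader: &mut dyn Read) -> std::io::Result<()> {
        let mut byte = [0u8; 1];
        reader.read_exact(&mut byte)?;
        println!("dispatched byte: {}", byte[0]);
        Ok(())
    }
}

fn main() -> std::io::Result<()> {
    // Dynamic dispatch through a trait object now works.
    let d: Box<dyn Dispatcher> = Box::new(PingDispatcher);
    let mut stream: &[u8] = &[42];
    d.dispatch(&mut stream)
}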
ah make Packet a Reader, maybe that could work No, make packet contain the struct you're expecting I think that can be done with generics Something like that, sry just woke up so not at full capacity :D nw ty for input If it can be done in Packet, then you'll still end up with two clean read/write functions As opposed to passing the streams around everywhere so i think the efficiency gain terry was talking about is to avoid this intermediary step of writing to packet, then changing to message, cos you have to allocate the intermediate resources required for packet but i guess you're saying give Packet the ability to deserialize to Message directly Yeah I think `Packet` as-is would be gone Then read_packet would return your Message And write_packet would take your Message Do experiment, I think it could work With carefully placed generics ++ gm ah ic ACTION googles "for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically" gm so i have been experimenting with this, and if we implement it as follows: read_packet() becomes read_message() which reads a Message from a stream, and sends it to the dispatcher in channel.main_receive_loop(), we need to add the generics R and M to Channel which is annoying the other thing is perhaps we can move the dispatcher call into read_message() terry: https://doc.rust-lang.org/reference/items/traits.html#object-safety violates these rules basically no i think the stream needs to be Box<dyn PtStream> basically https://agorism.dev/uploads/main.rs because generic functions don't exist until they're called but we're using vtable polymorphism, so the functions need to exist ahead of time but because it's generic we don't know all the possible functions ughh still trying to wget that code 3min later, my internet sucks lol so dynamic (runtime) dispatch is the way to resolve this ah ok i see it now Is anyone connected to darkirc rn? yes i just said sth i see terry and airpods in there Yep Just the bot is off : gm : Anyone here? : test : test : test : Anyone here? : echo : yes : test test back : yoo test back : hey test back : ohai echo back : i see u guys : Just the bot is down then : i'm gna be deploying some more nodes today/2m : Echo Echo back Bots were down Fun trick you can change your nick to 'testbot' you'll enter stealth mode, mirror bot won't detect you : :D dasman: nice <3 dasman: wait so the bots could very well be down and someone is replying to us to create an illusion, eh? XD airpods69: the cost of changing nicks at will :D airpods69: hahaha we dont get pinged this way though `[u8 ; 32]` would look like `[0 ; 32]`, right? then how would the coordinate look like when put into zk file? philoso mentioned that it would be like 0x000... so now I am confused cause `[u8 ; 32]` is what `vm.rs` likes. EcFixedPoint MY_CONSTANT = (0x000.....1, 0x000.....2), Then you'd decode the hex into bytes gm thanks brawndo, getting back to it. somehow I forgot it was a hex while knowing it was a hex. Weird lol : gm : gm gm, happy may day :D https://en.wikipedia.org/wiki/Beltane draoi: ohhh, makes more sense now, Thanks. Over here, we celebrate the arrival of spring. Got really confused since it's "Labour day" today over here and I was like "? damn is it the same reason for a happy may day" rust adding a new return Err(Error::...) syntax: https://github.com/rust-lang/rust/issues/96373#issuecomment-1328631967 ZkBinary with hex values after check looks like this, which is what we want right??
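Decoding one of those 0x... coordinates into the [u8; 32] that vm.rs wants could look like this (a sketch, not the actual zkas code):

// Parse "0x"-prefixed hex (64 nibbles) into a 32-byte array.
fn hex_to_bytes32(s: &str) -> Result<[u8; 32], String> {
    let s = s.strip_prefix("0x").unwrap_or(s);
    if s.len() != 64 {
        return Err(format!("expected 64 hex chars, got {}", s.len()));
    }
    let mut out = [0u8; 32];
    for (i, chunk) in s.as_bytes().chunks(2).enumerate() {
        let hi = (chunk[0] as char).to_digit(16).ok_or("bad hex digit")?;
        let lo = (chunk[1] as char).to_digit(16).ok_or("bad hex digit")?;
        out[i] = ((hi << 4) | lo) as u8;
    }
    Ok(out)
}

fn main() {
    let x = hex_to_bytes32(&format!("0x{}", "11".repeat(32))).unwrap();
    assert_eq!(x, [0x11u8; 32]);
}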
Those are randomly generated hexvalues, please dont look too much into it: https://imgur.com/a/vxrnSyZ "0x00...0".[2..] is put into the binary and vm.rs can convert it into [u8 ; 32] bytes which seems right? : i cant connect to ircd anymore : lilith seeds are down : terry: somewhere in the logs, I think brawn-do gave peers to connect to. : Lemme fetch it for ya (not on Linux so can't fetch it) : Added this to peers and it worked for me "tls://acab.accesscam.org:25561" : ty i'm back in greetz yay made it back in : Welcome back both of ya I was uninstalling my VM, re-downloading everything, it was a journey : For some reason, darkirc doesn't recieve messages but sends it if I'm running it on my android : Bwahaha glad you got help : that shouldn't happen : yeah ty janny : draoi, Magic, it happens xd airpods69, that was happening to me last time I tried darkirc. I think there's a reason for it but forgot why : Im reading the telegram logs to reply lol : logs also here https://agorism.dev/log/ : Title: Index of /log/ : Ah fair, I'll check it out later deki. I was afk and just opened to help out and phone was the fastest way. Afk again (won't receive dms) : Still not receiving dms on ircd but that's a different story lol (ircd isnt running xD) yeah I nuked my original ircd on here so if you try to send me a dm airpods69 I won't get it upgrayedd: I made a PR for the todo in parser.rs, someone already gave me feedback but tagging you in case you have your own input: https://codeberg.org/darkrenaissance/darkfi/pulls/254 deki: input looks good, follow it. ++ : @skoupidi pushed 1 commit to master: cd4655bb62: src/contract/money: delayed_tx test added gm test test back gm deki: how's it going :D going alright thanks, up and early with my coffee :> wbu? pretty good here too, had my coffee a while ago, time for breakfast and then coffee again xd haha nice hey hello yolo : gm : gm draoi so are you guys going to 'sunset' ircd? And shift permanently to darkirc? deki: we discussed this on monday meeting we are going to migrate over, so are starting node deployments, but are still polishing and testing some darkirc stuff yeah I remember reading it, wasn't sure if a final decision was made, thanks for update are constants a new feature in Rust? Reviewing the Rust book and they have a Constants section in ch 3.1, don't remember seeing it there when I originally read it I'll be afk for a bit, going back to my friend's place coz my battery is low : @skoupidi pushed 1 commit to master: 07606d27e4: drk: added fee call to transfer draoi: is it normal for the p2p test to be stuck in discovery sleep and retry? not all the time, some times it does : @skoupidi pushed 1 commit to master: 73a159ef83: contract/money: properly integrate txs fees into block rewards not sure what you mean, probably not can you send logs? or any more info /nick nighteous oops alright better gm gm nighteous how's it going deki? I'm going okay thanks, sitting in a cafe sipping my coffee :> wbu? trying to finish off a Rust task too :3 eating breakfast and waiting for the milk to get delivered. Gonna have more coffee. ohh nice, what are you working on? sweet. It's a TODO task to change an implementation in one of the files, also an opportunity to learn more about the language coz I'm kinda new to it unsure if you're familiar with Rust or the project, but you can see what I'm doing here https://codeberg.org/darkrenaissance/darkfi/pulls/254 checking it out. apologies, had to do some chores np oh boi... 
either you or me would be facing a conflict cause of the PR (either #253 or #254) XD ah nooo lol oh man this will affect what I'm working on lol that's okay idk I'll get mine to a working state and leave it to the devs to decide its fine. whoever gets it done and merged first, the other one can just change accordingly. Shouldn't be a big deal. Just some token stuff to be changed here and there no problemo. sweet, so it's a race (kidding...maybe) ahaha, a few cups of coffee is all i need then XD hehe :> though I still need to plan out the vm.rs part of things. Shouldn't be that much but lets see what I can come up with. go for it shall do senor :D afk gm gm upgrayedd: i think we should put a fee field in money::transfer(). then it's only allowed to be nonzero if the token_commit = hash(DRK, 0) (which can be checked in wasm) that solves the issue : @draoi pushed 2 commits to p2p_hardening: 98c8d276aa: net: fix message deserialization : @draoi pushed 2 commits to p2p_hardening: aaceccd494: net: write Message directly to the stream and fix read... gm do you guys think it's worth knowing WebAssembly integration? there's an online book that isn't long: https://rustwasm.github.io/docs/book/ gm Not really, that's more for web dev ah right, I shall ignore it completely then although my first ever language was HTML, if you think it's a programming language that is >.> draoi: We're still checking the p2p protocol version upon connecting, right? I mean the app version exchange yes we do a version handshake okay I'm thinking of changing the darkirc version to 0.5.0 if that's fine with everyone Would mean ppl have to recompile makes sense : @parazyd pushed 1 commit to master: 53834bbd9b: darkirc: Bump version to 0.5.0 ok there's a lilith and a node running If anyone wants to try recompiling now Try using only "tcp+tls://lilith1.dark.fi:5262" as a seed A node is running on tcp+tls://irc1.dark.fi:26661 And also recompiled one on "tcp+tls://acab.accesscam.org:26661" ok sure i just ran it with my existing hostlist and weirdly got this: [DEBUG] (3) net::protocol_version::send_version(): App version: 0.4.1, Recv version: 0.4.1 which is unexpected, also seemed to connect to older nodes fine ./darkirc -h says 0.5.0 ? oh wait 0.4.1? That's the lib version I suppose net/settings:87 doesn't work as intended darkirc is reporting 0.5.0 so yes i guess we found a bug :D It used to be 0.4.2 even (See my commit diff) So there's a problem indeed if you're seeing 0.4.1 yeah so i guess settings:87 is reading the root Cargo.toml Right : @parazyd pushed 1 commit to master: 4dda409e50: darkirc: Use binary crate version in p2p app_version Let's see if this works I think it's somehow not enforced? net::protocol_version::send_version(): App version: 0.5.0, Recv version: 0.5.0 : helo : helo so the code you pushed works but as you say it doesn't seem to be refusing connections from non-compatiable versions : hey 08:14:53 [DEBUG] (3) net::protocol_version::send_version(): App version: 0.5.0, Recv version: 0.4.1 : Hey I c u mad : hey, i didn't update my config with the newer nodes tho lol : we do this check tho in the version handshake: if self.settings.app_version.major != verack_msg.app_version.major && self.settings.app_version.minor != verack_msg.app_version.minor {... 
refuse connection } Yeah that'd check the 0.5 hmmm oh but the logic is wrong lol It uses AND ahhh should be OR right kek : @parazyd pushed 1 commit to master: 41e87e3aee: net/protocol: Fix version exchange <3 yay So it always passed because major was always 0 fml that CTRL-C lag is so annoying oh lol Yeah I wonder what keeps it stuck It doesn't always happen ok I'm migrating mine to 0.5 proper now same tho i gotta run in <5m *nod* [ERROR] net::protocol_version::exchange_versions(): send_version() failed: Channel stopped nice (on old hostlist) gr8 Starting mine in a sec woosh so many rejections :D Interesting the nodes keep trying we can modify that ok i see you on darkirc but it seems the bots are on the other version Yeah now we're in a different network than them ++ gtg afk for a bit When dasman is around I'll ping to update cya o/ hey can i move the ui work under darkfi/bin/, rn it's in a separate repo https://codeberg.org/darkrenaissance/darkwallet or do we prefer to keep it separate (for now) Sure, why not are there TODO tasks for the wallet? Always wanted to contribute to developing a wallet not yet, i'm in the move fast and break things phase btw does it matter which rust version we use when running make test/check/clippy etc? I'm on rustc 1.80.0-nightly philosoraptor: only way to go I"m getting this error for make check, do I need to do a cargo clean or something? It shouldn't matter as long as it's rust nightly https://pastebin.com/ZrQxscZ8 brawndo: thanks Yeah maybe you want to run `make distclean` ah ty, will try that soon, brb philosoraptor: what issue that check solves? we already check fees are non zero etc b o/ I'm still getting this 'invalid metadata files for crate derive_builder' error when running make test/clippy for my PR, no other errors come up https://pastebin.com/ZrQxscZ8 I tried make distclean, anything else I can try? clean your cargo cache, this is probably due to version missmatches in pulled crates have found other people with the same type of error on google but nobody has indicated whether they resolved it okay tnx also, always include what you run and the full execution log, not just the file okay will do s,file,error brawndo: will update rn draoi: the lag happens in p2p.stop(), not always, but when happens it's between 10 to 15 seconds Hey dasman ok thanks Use "tcp+tls://lilith1.dark.fi:5262" for the seed We should make sure it works ++ after running make test it's telling me it can't find a crate for 'tor_rtcompat' log file here: https://pastebin.com/2kTJ2NQm but it does exist in Cargo.toml except by this name: tor-rtcompat it's being used here: src/channel/handshake.rs:15:5 should I change it in the code? Or leave it? deki: no don't touch the code did you clean up the cargo cache? upgrayedd: yes I ran cargo clean, and make distclean can try agai Maybe the crates should be updated yeah I found a command 'cargo update -p tor-rtcompat' keep in mind tor_rtcompat wouldn't work thats cleaning the local stuff not your global ones oh I see, how do I do that? cargo update modifies the locked version hence the missmatch : Test Test back you are probably not using the repo Cargo.lock, or rather you modified it : werks hmm I see : @zero pushed 1 commit to master: a1f891d45a: added darkwallet in bin/. See README for usage instructions upgrayedd: then you can pay for fees like you wanted within the same money::transfer() call this way, it's always possible. 
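The version-exchange bug above in miniature: with &&, a 0.4/0.5 pair slipped through because the majors (both 0) already matched; the refusal condition needs ||:

struct AppVersion {
    major: u32,
    minor: u32,
}

// Compatible only when BOTH parts match, so we refuse when EITHER
// differs (the old && only fired when both differed at once).
fn compatible(ours: &AppVersion, theirs: &AppVersion) -> bool {
    !(ours.major != theirs.major || ours.minor != theirs.minor)
}

fn main() {
    let v04 = AppVersion { major: 0, minor: 4 };
    let v05 = AppVersion { major: 0, minor: 5 };
    // The buggy && check accepted this pair since major == major.
    assert!(!compatible(&v04, &v05));
    assert!(compatible(&v05, &AppVersion { major: 0, minor: 5 }));
}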
we could add it to otcswap too philosoraptor: revert commit you have local paths committed oh fixed, i did a hard push ty : @zero pushed 1 commit to master: c78a469c83: added darkwallet in bin/. See README for usage instructions :D should i use path = "../../" instead of git = "codeberg darkfi" ? philosoraptor: what do you mean pay for fees like I wanted? actually that seems better yes re path, since its in the repo you can use the relevant/direct one : Hi : echo : hey echo back : Nice upgrayedd: so you know we want to within the same tx make a transfer and pay a fee, it's possible if money::transfer() has a fee field i don't see those darkirc messages, i see completely different ones in #dev philosoraptor: You should read the backlog here https://agorism.dev/uploads/whatisee.txt ah ok tldr; update to latest darkirc and use "tcp+tls://lilith1.dark.fi:5262" as seed philosoraptor: how would the fee field help? ++ upgrayedd: because a call can pay its own fee unless you're complicating it in the sense of allowing both a fee and/or transfer to act as fee calls yes hmm brawndo what do you think? The current system doesn't work because of the Merkle tree changes : @zero pushed 1 commit to master: 927c0724a8: wallet: Cargo.toml make darkfi-serial use ../../ instead of git url Since the output you'd be spending for the fee isn't valid Or rather, isn't using a valid Merkle root yeah exactly the problem tho is still for chaining calls that consume previous output adding the fee field in transfer only complicates fee handling, and just solves the single coin in a wallet issue what's the usecase? not the core problem the call chaining works for the dao and protocol owned liquidity .etc example chaining that doesn't work: I want to make 2 transfer calls in the same tx, in which the output of the first is the input of the second call so for example flash loans? yeah or multi transfer calls in general when you simulate/validate the tx it's valid since the root of the second call is: current_tree.append(call[0]).root() which is a valid historic root for the run but if any other tx mutates the tree before your tx is included in a block, the root is not valid anymore btw have you tested the dao tx in a real scenario? aka blocks mutating the merkle tree, before the tx is included in a block? (like the test pushed yesterday showcasing this) i think we could achieve this by modifying money::transfer() to for example have a special input/output yeah but again, that just solves the single coin issue, not chaining i think the dao works since it doesn't depend on this behaviour, and the tests all apply everything atomically contracts should be designed with this in mind atomicity is not in the context per se more like linear execution if you only test with the linear execution in mind, then they are always "valid" Well you can never create an output and use it within the same tx It only works if no txs happen in the meantime well yeah, but we have to state that "drawback" of the system We should fix it how? thats what we're discussing :D if your contract can use a value within the same tx, then if it's not in the tree, then check if it's in that tx so for example the tx could have a second merkle tree and you check that inside zk so it's either in the main merkle tree or within the same tx proof would be invalid tho since it used the old root no it's valid since it's just a merkle root during that tx merkle root of what?
the coins of the previous calls The existing coins tree that root might/will never be a valid historic root It doesn't have to be it's the root within the current tx philosoraptor's correct not within the entire coins set That's a good idea aha ok gotcha, so like just a mini sub tree It's a clone whose lifetime is 1 tx yeah but and that tree gonna be having just the tx calls? how would then the chainned call output be valid, since the used root is not part of the main tree? you have to do 2 membership proofs within zk, and one of them must be valid aha one for the main tree, if it passes all good, if not, check second, which is just the tx calls tree? Wait well that still requires a full sync I think yep, up until that current call e.g. You create that tx and submit it, but another tx enters a block before yours does Then you wouldn't have a valid inclusion proof because the position would be different yeap exactly! your root will be before their tx entered the block Which in turn would produce a different root the root is just for that tx it doesnt include any coins outside the tx philosoraptor: thats true only for the first call root that one passes no problem, since the root it used is a historic one the issue is for any other next call, since it used that history_tree.append(call[0]).root() which is not a valid root as the other tx mutated the tree there are 2 trees, the tree for all coins (which we use a historic root for), and the tree for this tx (which we always use the current root up until that output including the previous calls) the historic root is as is the tree for this tx is always deterministic since it only depends on the tx data which we compute how are you going to make the zk proofs with something that you don't now? aka the current root? i do know the current root for this tx it starts from empty set {}, then we add coins to it {c1, c2, ...} but it doesnt include any coins not in this tx yeah but since its not a valid thing for the main tree, what prevents me from double spending or double minting stuff? the nullifiers must be unique across the entire tx ok going to prepare lunch, will check here in 30 mins here will there be any issues when using the coins generated by the consecutive calls, which use the tx merkle tree and not the main one? : @rsx pushed 3 commits to master: 663712f8b0: wallet: only allocate texture once. Instead use a ResourceManager : @rsx pushed 3 commits to master: e76be71d4b: cargo fmt : @rsx pushed 3 commits to master: 05a797ec7e: wallet: add convenience methods add/set/get_property_type() that make code look much cleaner consecutive calls should not have their coins added, only the previous calls within the tx philosoraptor: I mean when we want to use the "final" output of such a tx like: transfer with two calls where second call consumes first calls output, then later we want to consume the seconds call output in another tx then the other tx will generate the same nullifiers as the 2nd call, and this will be rejected philosoraptor: 1) What the new tx consumes the output of the second call, why would its nullifier be revealed? 
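An out-of-circuit sketch of the two-root idea being discussed (names hypothetical; in zk this would be two merkle membership proofs, of which one must pass):

use std::collections::HashSet;

type Root = [u8; 32];

// Analogue of the "2 membership proofs, one must be valid" rule.
fn input_root_valid(
    claimed: &Root,
    historic_roots: &HashSet<Root>, // global coins tree history
    tx_local_root: &Root,           // tree over this tx's earlier call outputs
) -> bool {
    historic_roots.contains(claimed) || claimed == tx_local_root
}

fn main() {
    let mut historic = HashSet::new();
    historic.insert([1u8; 32]);
    let tx_local = [2u8; 32];

    // An output created by call 0 and consumed by call 1 of the same
    // tx validates against the tx-local root, even if other txs
    // mutated the global tree in the meantime.
    assert!(input_root_valid(&[2u8; 32], &historic, &tx_local));
    assert!(input_root_valid(&[1u8; 32], &historic, &tx_local));
    assert!(!input_root_valid(&[3u8; 32], &historic, &tx_local));
}

The tx-local tree is deterministic from the tx's own call data, so any verifier can rebuild it, while nullifier uniqueness across the whole tx still guards against double-spends.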
the nullifiers are added to the db after every call you won't be able to consume the same output within a single tx I'm talking about consuming the txs second call output to another tx thats not consumed the outputs of the 2nd call should be spendable by other txs the outputs are unaffected the money contract would need some kind of redesign tho to include the second merkle tree which is always a new empty one for each tx, and gets mutated by each call yep the real question tho: does it impact the security model? since the second merkle tree will only include tx call stuff we'll have to think it over btw fee can also do the same check in the tx specific tree, so we can have self paying txs using call output : @skoupidi pushed 1 commit to master: 5f5cfbafa8: validator: random tx handling fixes : @skoupidi pushed 1 commit to master: 7a891b0b90: blockchain/mod.rs: minor cleanup hi, newbie here. i have a basic question: as darkfi supports user-defined smart contracts, how are contracts deployed to the chain? or is contract code sent as part of public input (instance) inside txs to a universal verifier (VM)? godel: they deploy the was bincode of the contract using the native deploOor contract and txs for that contract define they call that using the contract id s,was,wasm the VM knows which bincode to load by the tx defined calls, and executes the corresponding(predefined) functions of the contract itself thank you, upgrayedd upgrayedd: oh, i suppose this is not covered by the docs. thank you i'll take a look at the contracts code still geting this 'invalid metadata files for crate 'derive_builder'' error, I used these commands "rm -rf ~/.cargo/registry, rm -rf ~/.cargo/git, rm -rf target" the error message tells me the rmeta file it can't open, should I just delete it? note: failed to open rmeta metadata: '/home/ubuntu/dev/darkfi/target/aarch64-unknown-linux-gnu/release/deps/libderive_builder_fork_arti-0a79f3f660686a80.rmeta' or delete the whole deps folder? whole deps folder or even better, start from fresh repo did you by any chance run anything with another/priviledged user? I'm pretty sure I did, because I was uninstalling my VM, reinstalling etc when ircd was down thinking it was my VM at fault there is your issue, wrong permissions. anyway going afk, glhf thanks for guidance gm gm anyone know what this could be after running make clippy: error: failed to run custom build command for `randomx v1.1.11 ? nevermind I didn't have cmake installed, the bane of my existence cmake is always makes a comeback ooh make clippy is nearly there, now I'm onto library errors that bring me back to my cross-compiling days (last year) and the joy that was haha hopefully you figure it out soon I have 79% battery left with no access to charge for a while, we shall see race against the clock eh? yeah or I go back to going through the Rust book and pick this up later tonight :3 hmmm, race against the clock gm gm weird my UI gives strange flickers on rasppi but the other computers are fine probably some incompatible graphics api? 
if it happens in other applications in the rpi, then could be your hdmi cable or whatever you're connected with hey so i've done the message subsystem refactor so that it operates on a stream/ Message directly rather than going via packet i have basic encode/ decode working on Message in this new setup however i'm getting stuck trying to extract the length from the stream and use take() to limit what's read according to the reported length rn i'm manually reading the first byte and assigning it to a usize (the same logic as VarInt) (keep in mind this is a test case so not thinking about magic bytes etc rn) nice work draoi, how complex was the refactor would you say? one sec just finishing this Q then once we have the VarInt assigned usize, we have to cast it to a u64 cos that's what take() needs this all works fine but then the decoding fails am i missing something? draoi, just read the VarInt you don't need take sorry yes you need take deki: to answer your Q, the refactor was ez, the (de)serialization is tricky cos i have a pea brain but just read the varint idk why you're reimplementing varint, it already exists in async_lib.rs so just read it nice :> it's bc when i call M::decode_async() after doing the VarInt::decode_async to extract the length the decoding fails i thought it was bc of the duplicate VarInt decodes since M::decode_async() also triggers VarInt encoding under the hood no it's not it doesn't well my print messages beg to differ :D ok give me an isolated test case let M = u32 sure i have a unit test i can just push it, in tests/ for example and give me usage instructions i'll be afk for a bit : @draoi pushed 1 commit to async_decode: cfd14e32c2: tests: isolated test case for failing M::decode_async() git fetch, git checkout async_decode cargo test --release --all-features --workspace message_encode -- --exact --nocapture the first test it just to show a working example, where we call encode/ decode on Message/ String w/o attempting to extract the length : @draoi pushed 1 commit to async_decode: a20c44a480: tests/message_encode: cleanup brb grabbing coffee b written += 110u32.encode_async(&mut buffer).await.unwrap(); Wrote bytes: 4 if the varint was being written, then it should say 5 draoi: this is wrong because you never write the length of the payload written += name.encode_async(&mut buffer).await.unwrap(); written += testmsg.encode_async(&mut buffer).await.unwrap(); you need to write testmsg to buffer2, VarInt(buffer2.len()).encode_async(), then buffer2.encode_async(...) oh daym : @rsx pushed 1 commit to master: 266ecc2ed1: wallet: add scan_dangling() method used for garbage collection of nodes In fixed_bases.rs, if I want to generate u and z, then I should use `find_zs_and_us()` function (halo2/halo2_gadgets/src/ecc/chip/constants.rs:117). This function takes in an CurveAffine, not really sure how to get it. Honestly, does make sense but it does not? Weird, I know it is supposed to be generated using the (x, y) public keys that I added to zk a while ago (#253 for context). x and y are [u8 ; 32] so thats done, a hex of 32 bytes. Idk how to convert it to CurveAffine or how to generate G out of it. 
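The framing fix spelled out above (serialize the message to a scratch buffer first so its length is known, write the length, then the bytes; bound the read with take()), in a minimal sync form with a one-byte stand-in for the real multi-byte VarInt:

use std::io::{Cursor, Read, Write};

fn write_framed<W: Write>(w: &mut W, payload: &[u8]) -> std::io::Result<()> {
    w.write_all(&[payload.len() as u8])?; // VarInt(buffer2.len())
    w.write_all(payload)?;                // then buffer2 itself
    Ok(())
}

fn read_framed<R: Read>(r: &mut R) -> std::io::Result<Vec<u8>> {
    let mut len = [0u8; 1];
    r.read_exact(&mut len)?;
    // take() wants a u64 and bounds the decode to the reported length.
    let mut payload = Vec::new();
    r.take(len[0] as u64).read_to_end(&mut payload)?;
    Ok(payload)
}

fn main() -> std::io::Result<()> {
    let mut wire = Vec::new();
    write_framed(&mut wire, b"hello")?;
    let got = read_framed(&mut Cursor::new(wire))?;
    assert_eq!(got, b"hello");
    Ok(())
}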
;-; : @rsx pushed 1 commit to master: c475c198e7: wallet: begin reorganzing py code into a submodule : @rsx pushed 1 commit to master: 5777c0ffe6: wallet: draw a rather attractive looking box with a lush gradient : @rsx pushed 1 commit to master: 85ceda586e: wallet: make the beautiful box resize but preserve border width airpods69: convert to Fp elements using from_repr(), then convert to CurveAffine (grep around in halo2 or darkfi code for a snippet) rg from_repr darkfi/src/ oh shoot yes, from_repr, I forgot about it, my bad. Shall do! thanks alot. ++ : @rsx pushed 1 commit to master: 2623d0e4cf: wallet: make screen coords consistently use pixels everywhere : @draoi pushed 2 commits to p2p_hardening: 975d0aabb9: net: working bounded Message (de)serialization : @draoi pushed 2 commits to p2p_hardening: 0c0f7da473: net: cleanup de(serialize) Message refactor zz 6+ gm : gm hihi !list No topics i figured out how to print lines in python while accepting user input https://agorism.dev/uploads/ctl.py like bluetoothctl does nice for CLI tools where you can accept user commands frieren-elf64: can i rm message_subscriber::message_subscriber_test()? rzn: due to the refactor (removing packet) we need to pass a PtStream around notify() and trigger() calls meaning the only way to get this test working is to create actual channels etc, i'm thinking it can be moved to net/tests.rs integration test actually that won't work since the Stream is private to Channel... thinking we can do this using a Connector or at least copying the code in Connector into a unit test (and would need to have an acceptor running on the other side etc) so we would need to start an inbound node, manually connect to it (copying Connector code), setup dispatchers on both nodes, then proceed w test : @draoi pushed 2 commits to master: a5c93b8f82: channel: add start_time to ChannelInfo : @draoi pushed 2 commits to master: 78bb9f554e: net: remove intermediate Packet type... gna put aside the Resource Manager work (on branch p2p_hardening) for a bit to focus on issues on master- p2p.stop() lag on darkirc, weird behavior on test reported by upgrayedd (tho more info here would be useful) draoi: lag happens specifically in stopping outbout slot https://codeberg.org/darkrenaissance/darkfi/src/commit/78bb9f554e6e82b95e7039f5fabbaf4f9c1746ff/src/net/p2p.rs#L159 not sure why tho draoi: i dont think i wrote that test, but sure https://codeberg.org/darkrenaissance/darkfi/src/commit/78bb9f554e6e82b95e7039f5fabbaf4f9c1746ff/src/net/session/outbound_session.rs#L192 thanks dasman, it could be bc the slots are looking for available addrs to connect to, which involves some locks etc did you notice how long the lag is/ does it vary/ is it completely blocking CTRL-C? 
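Roughly what the from_repr()/from_xy() conversion above looks like with the pasta_curves APIs (a sketch; worth double-checking the exact trait paths against the halo2/darkfi code):

use pasta_curves::arithmetic::CurveAffine;
use pasta_curves::group::ff::PrimeField;
use pasta_curves::pallas;

// [u8; 32] reprs -> field elements -> affine point. Both steps can
// fail: non-canonical bytes, or an (x, y) pair that's off-curve.
fn point_from_xy_bytes(x: [u8; 32], y: [u8; 32]) -> Option<pallas::Affine> {
    let x: pallas::Base = Option::from(pallas::Base::from_repr(x))?;
    let y: pallas::Base = Option::from(pallas::Base::from_repr(y))?;
    Option::from(pallas::Affine::from_xy(x, y))
}

fn main() {
    use pasta_curves::group::{Curve, Group};
    // Round-trip the generator's coordinates as a sanity check.
    let g = pallas::Point::generator().to_affine();
    let c = g.coordinates().unwrap();
    assert_eq!(point_from_xy_bytes(c.x().to_repr(), c.y().to_repr()), Some(g));
}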
probably, bc I also noticed after hitting ctrl-c the node still made a connection to a new peer doesn't happen always but when happens it's always between 10 and 15 seconds interesting ty, i'll run some tests and try to fix this week tysm bbl o/ b o/ draoi: thread '' panicked at src/net/channel.rs:290:61: called `Result::unwrap()` on an `Err` value: Kind(UnexpectedEof) note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace gm omg tnx dasman : @draoi pushed 1 commit to master: 0480a2ecb2: net: correct error handling in Message serialization upgrade apologies, was squashing commits from the other branch and some older code slipped in hey fyi 78bb9f554e6e82b95e7039f5fabbaf4f9c1746ff is an optimization which makes nodes read and write Messages to the stream directly instead of using Packet but that means nodes running that commit and above can't talk to nodes running commits before then why? well nodes before then will be sending Packets and trying to deserialize Packet it should be the same [magic:4] [cmd_len:varint] [cmd:...] [payload_len:varint] [payload...] doesnt matter if i put it in a struct and read that, or read each one individually we weren't writing the cmd len to the stream before yes we were you don't need name_buffer https://codeberg.org/darkrenaissance/darkfi/src/commit/2623d0e4cf2819af10c6f58f303b7bf2e929d703/src/net/message.rs#L167 also the names are wrong, should be (command, payload), not (name, msg) the packet has fields: magic, command, payload ^ this is the code before upgrade, it just writes magic followed by command there's the length? did you actually look at what's being written? and read the serial code too you should be able to clearly identify what each byte is in a hexdump i called it name rather than command bc it's constructed from M::NAME but will change it back ok : @draoi pushed 1 commit to master: f604f0054f: net: fix command/payload naming and remove redundant buffer !list No topics i'm using this https://github.com/cargo-limit/cargo-limit definitely recommend doing: `find src | entr -r cargo lrun` in your compile window : gm : gm : @draoi pushed 1 commit to master: e71f8f5a82: channel: remove duplicate byte being written in send_message()... why `entr` ? isn't `cargo watch -- cargo lrun` working fine too ? hey I won't make it to the dev meeting today, but if you guys need anything for the testnet I'm keen to help, like with testing/trialing it nice cargo watch is cool brb (restart) b o/ : gm : gm arm0503.7a891b0b: i tried cargo watch, but imo entr is better find src/ | entr -rs "clear && cargo ltest" : `entr` is 'c', this is Rust, I'd like to 'oxidise' my setup properly ;) : https://github.com/aag/eagle-eye : Title: GitHub - aag/eagle-eye: A file watcher written in Rust i turned off my internet and cargo watch gave me a load of errors that it could ping crates.io i prefer not to have telemetry everytime i run a compile !list No topics hey good few ppl are AFK today gm o/ anyone have stuff they wnna discuss? or postpone till next week : gm : yo : gm everyone : gm just wanted to know when do we plan on leaving ircd?? yeah since no topic added, postpone to next week? airpods69: kinda started migrating already but there's a few bugs/ weird behaviors am looking into rn so still on ircd till that's resolved ah fair, I'll get the PR #253 done and merged then help out with janny work if I can. 
(best way to learn ig) nice ty ty : !list ok so i guess that's it frens : hey the meet bot is still on ircd a hi-bye meeting : but there's a few ppl AFK so we're considering postponing till next week : unless anyone has something they want to discuss : ah ok, ty re: meetbot : have question, is project looking to hire new devs? : yes sir : maybe you saw already https://darkrenaissance.github.io/darkfi/dev/contrib/contrib.html : Title: Contribute - The DarkFi Book : thank you, have read the contrib and looking into the TODOs, is it okay to post questions in this channel during the week? : yeah for sure : welcome : nice, thank you, that is all I have for now. : :) : \o/ : fyi we're kinda between here and ircd at the moment : i'm still fixing some net stuff so we have both running rn : but should be migrating fully to here soon (TM) hey : draoi could you see my 'test' on ircd#test? unsure if I have connection issues i was keep in dev : np hey frieren-elf64 : which chan HCF? : #test *i was deep in dev : actually I can see I have a broadcast error in my logs : i'm not in that chan tbh just checking, does the mirror bot work both ways? is darkirc seeing this meesage? no : but possibly you're not connected since the seeds may be down (already migrated to darkirc...) : let me share a peer : ah ok : peers = ["tls://acab.accesscam.org:25561"] : sure I just have lilith0 and lilith1 so that's probably it so worries frieren-elf64 meet was posponed anyway ah kk ACTION goes back into the matrix godspeed o/ !end Elapsed time: 28583418.6 min Meeting ended XD long meeting 54.3 years lol hahaha that's unix epoch test test back : ty, got it sorted : nice see ya : @rsx pushed 5 commits to master: 6d7bbd3ed7: wallet: fix SceneGraph::rename_node() function which wasn't renaming parents and children info fields : @rsx pushed 5 commits to master: 616735c0a3: wallet: bugfix .unwrap() causing crash when text obj is an empty string : @rsx pushed 5 commits to master: 9a90542009: wallet: add is_visible property to objects : @rsx pushed 5 commits to master: 7e9fba23fb: wallet: create a chatbox : @rsx pushed 5 commits to master: 2f58097787: wallet: make a more powerful property system is it normal to keep seeing "[ERROR] net::p2p::broadcast_with_exclude(): P2P::broadcast_with_exclude: No connected channels found"? airpods69: check out bin/dnet/, it's a viewer to see your node state. you might need to setup the config https://agorism.dev/uploads/dn.txt frieren-elf64: lilth is the only one on it and its offline welp... I'll debug this later, doesn't really bother me, I can just hide it z on tmux hah ah well... I think I messed up my PR ;-; welp, eh I'll fix everything and make a different PR later on. 4436 lines of horror muahaha (just for the jokes after I messed up, actually shouldn't cross like maybe 120 once Im done) anyways question: I just want to confirm did I use the function find_zs_and_us the right way? link to said line (it is still incomplete) https://codeberg.org/airpods69/darkfi/src/commit/307df5312bdfcf5dfb661fd7bd5ecc9f8c6289d8/src/sdk/src/crypto/constants/fixed_bases.rs#L263 also if I am using it the right way, is it supposed to take 60+ seconds to generate values for u and z? Doesn't feel right... test smt if someone wants to see what I am seeing afk it would be nice to have a high-level view of a smart contract life cycle from contract writing all the way to deploying it and invoking its functions. i would volunteer to create it if i could actually figure out the whole thing... 
it's probably self-evident for you guys but not so for newcomers like me gm godel: someone made this before- https://odyslam.com/blog/darkfi-smart-contracts/ doesn't rly go into life cycle tho iirc : sup yo airpods69: it seems correct, i didn't test the fn myself so if it takes a long time, we should think of another way to do this maybe this data should be distributed with the contracts. how large is it when you serialize everything? (or you can just calculate it) we do this already, see for example: darkfi/src/contract/dao/src/entrypoint/mod.rs:71 using include_bytes!() in the wasm we should then bcos of this, provide a large number of pregenerated points too frieren-elf64: if I use num_windows=NUM_WINDOWS then I'd say about 70 seconds, didn't time it properly. But if I use num_windows=H then about 30 seconds (constants defined in fixed_bases which is also the one being used in other structs too, still gotta make sure what it exactly is for) gotta calculate the size. I'll do it once im back. afk for a while from laptop, using my phone here : @rsx pushed 1 commit to master: 8332a3648c: wallet: update python API for new property system rewrite nw cya gm : @rsx pushed 1 commit to master: 1997292b36: wallet: improve methods, by making use of queues. signals now pass data too. : @rsx pushed 1 commit to master: 8b1fddacb3: wallet: add create/delete_mesh/texture() fns hey dasman, around? draoi: hey what's up? ah hey just looking into this CTRL-C issue observing something kinda weird wondering if you also observed this what is it? haven't figured out exact steps to reproduce so just running for different intervals and it doesn't always happen basically, normal shutdown sequence we get an info print like "Received shutdown signal" and then we trigger the p2p.stop() and i have a load of print statements showing everything stopping from what i've seen, in the case of an abnormal shutdown (CTRL-C lag) none of these print statements get triggered it just hangs for some secs with no output, then stops (with no output) however you said before you noticed it was catching on slot.stop() so was wondering if you observed something different? ha, I haven't observed such a thing Or didn't pay attention just to be sure, when you noticed the slot.stop() lag, you can print statements confirming that? or how did you triangulate s/can/had cos if so maybe i'm observing something else I used Instant and elapsed around each .stop() ahhh nice one any chance you have the log output saved? no sorry :( np tnx for info happy to help this is good, even if lacking some of the info i want and having a couple errors. if i come up with something more to the effect i have in mind i will open a pr to the docs draoi: thank you! hey yo okay so fixed_bases: generate_zs_and_us() takes 18.19 seconds to run the generator function when num_windows = H (and H = 8). frieren-elf64: about distributing the data with contracts, I probably am not the best judge for that so it's best if you guys discuss it amongst yourselves (I listen and learn :D). yeah we should ask brawndo when he's around what is the best solution but would be good to know how much data is involved since if it's too big then it's expensive to distribute onchain and we can look into pregenerating a bunch of points .etc oh yeah my bad UserDefinedConstant { G: pallas::Affine, u: Vec<[[u8; 32]; H]>, z: Vec<u64>, } This is what you would be looking at.
Its the same as the other structs which go into the heap from the hardcoded values depending upon the option G gets generated from `pallas::Affine::from_xy` and used to calculate u and z (which I believe is the bottleneck now). what is the len of z? G = 64 bytes, u = 32*H per window, z = 8*n, so total = 64 + 32*H*n + 8*n which is not too bad actually ok off to chill, gn alrighty same here, gn. :D gm greets o/ : gm gm gm bbl quit hey : @skoupidi pushed 1 commit to master: 32fcb6e6df: drk: store proper information for token mint authority hello Hello I was having some trouble connecting to darkfi for the past few weeks so apologies if people got flooded with posts from me hahaha Adding the peer connection that was listed in the telegram chat fixed it for me. Yeah we're slowly going to deprecate the infrastructure holding some ircd servers and move onto darkirc what is the difference between the current setup and darkirc? gm fatback: the main difference is darkirc retains messages for a peer when the peer node is offline (current time is 24hrs) using Directed Acyclic Graphs. I was trying to run darkfid (new default config), are the lilith seeds down? (lilith0 gives me a timeout and lilith1 refuses to connect) hey : gm nighteous: Do you mean darkirc? brawndo: nope, I meant darkfid. I was checking out the book and tried to run a node. We'll be deploying a new testnet very soon upgrayedd is just finalising some stuff in the wallet oh okay then But speaking about darkirc, everyone should update it to latest git master yes i'm also working on 1-2 improvements to net rn hey upgrayedd: so to mint a token, you must call token_mint() which requires an auth_contract (in the token attributes) the reason there is a separation between token_mint() and auth_token_mint() is because auth_token_mint() is the auth_contract that does the same logic as the old token_mint() (written by bra*ndo) where the old logic was: use a keypair to mint tokens but we actually want to allow tokens to be minted programmatically, for example emission by a protocol so the new token_mint() (in token_mint) has the stuff with parent_func_id that you see, which means: "token_mint can only be called by the function specified in the token_attributes for that token" so if i mint a token with token_auth_parent = foo::bar(), then only foo::bar() can call token_mint() for that token to get the old token functionality, we set parent_func_id = money::auth_token_mint() btw thanks a lot for reviewing the code, i greatly appreciate the extra eyes and happy to answer Qs hey we're not tagging commits yet right like we discussed 2 weeks ago don't see @bugfix or similar on recent commits : @draoi pushed 1 commit to master: 39350f8740: outbound_session: start() and stop() slots concurrently... We can start doing that after release When it is relevant ah kk !list No topics !topic zk: distributing arbitrary points data with the contracts Added topic: zk: distributing arbitrary points data with the contracts (by nighteous) gotta look into this cause generating z and u using find_zs_and_us is slow. hello all holla sweet! excited to be here. thanks for having me. huge s/o to Varoo & skou pidi welcome have fun really a big newb for all things IT, security & privacy, but willing to learn and support the cause that you guys are involved in. this is the way!
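A worked check of the size formula above, noting that u holds one [[u8; 32]; H] entry per window so its term scales with the window count n too (assuming halo2's full-width NUM_WINDOWS = 85 with H = 8):

// Serialized size of one UserDefinedConstant under the layout above.
fn serialized_size(h: usize, n_windows: usize) -> usize {
    let g = 64;                 // affine G: 32-byte x + 32-byte y
    let u = 32 * h * n_windows; // one [[u8; 32]; H] per window
    let z = 8 * n_windows;      // one u64 z-value per window
    g + u + z
}

fn main() {
    // 64 + 21760 + 680 = 22504 bytes, roughly 22 KiB per constant.
    assert_eq!(serialized_size(8, 85), 22504);
}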
:)) : @dasman pushed 2 commits to master: 39bd52c985: bin/darkirc: ignore CAP END if user is registered : @dasman pushed 2 commits to master: b03ae05813: bin/darkirc: fix multiple join msgs in IRC client + NAMES list brawndo: getting NAMES list after getting history :D : @dasman pushed 1 commit to master: 71b2998e3a: bin/tau: flush sled db and stop dnet and deg when caughtung termination signal If i see a link that 404's on the wiki, whats the best way to inform and get it updated? I was going to comment it, but not sure if thats the best way : @dasman pushed 1 commit to master: f93145a05a: tau: remove dummy info logs anon: create a PR on https://codeberg.org/darkrenaissance/darkfi : @dasman pushed 1 commit to master: 047a0e4bb7: bin/deg: simplify detecting merges and forks in graph column gm : Hello, inside src/validator/consensus.rs, within the unproposed_txs function, there's a 'TODO' comment indicating a need to replace TXS_CAP with a gas limit. Anybody know if we already have an algorithm or definition for determining the appropriate gas limit to compare against in this context? hi o/ : gm gm alkaloid, are there darkirc nodes i can connect to over tor? trying to get setup dasman: ah nice improvement You should carefully follow the protocol, don't just test with weechat btw : gm : calm: We don't have a set gas limit yet : calm: We first have to choose a divisor for the gas units : i.e. 1000 gas units should cost 0.000000001 native token : https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/validator/verification.rs#L515 : Title: darkfi/src/validator/verification.rs at master - darkrenaissance/darkfi - Codeberg.org : brando, thank you. Will spend time studying fn verify_transaction. Are there any dependencies that must be resolved before choosing the optimal gas unit divisor? Can we use a constant or configured value to proceed with coding, with the option to configure the divisor value later without affecting the implementation? : It's mostly a question of token engineering : However I think we should just start with a constant value, say: fee = gas_units // 1000 : Right now, fee = gas_units : But that's too expensive, meaning a lot of tokens have to be spent even for a basic transfer : In src/runtime/vm_runtime.rs we also have a GAS_LIMIT definition, which we might eventually want to tune, based on what the contracts are doing generally : But for now the 4M limit seems to work : err, 400M : @rsx pushed 1 commit to master: 8be3db21a4: darkirc: update java version for building android binaries : okay, thank you. In the context of fn unproposed_txs, is the gas limit we need to calculate analogous to Ethereum's block gas limit, or conceptually similar? : Yeah so the limit is introduced in order to prevent the WASM entering an infinite execution loop : Once the limit is reached, the execution will quit and fail : okay, appreciate your time, will keep that in mind. : On another note, created an account on codeberg and noticed the pull request button is disabled. At the same time, it is enabled in github. Should we start buy doing pull requests in github or is there a permission / configuration that needs to be provided to do pull requests in codeberg? : hm that sounds wrong : They're enabled : okay, ty for letting me know; will look into Codeberg account settings. Heard about Codeberg today, seems like a good repo hosting platform. 
: nice thing about it is you can create accts over tor : maybe you saw already https://darkrenaissance.github.io/darkfi/dev/contrib/tor.html : Title: Using Tor - The DarkFi Book : draoi, gm to you as well. Very cool, thank you for sharing, will setup codeberg accordingly. Good to hear the project made a good choice with the repo tech. : @skoupidi pushed 1 commit to master: 4101f8b608: drk: tx history handling cleanup hey frieren: need to update to the latest version and will share the hostnames seeds=["tor://cbfeuw6f4djcukyf3mizdna3oki6xd4amygprzdv4toa3gyzjiw6zfad.onion:25554","tor://6pllu3rduxklujabdwln32ilvxthvddx75qg5ifq2xctqkz33afhczyd.onion:25551"] < these two are now on v0.5.0 : @skoupidi pushed 1 commit to master: d9dbbc18f6: doc/arch/p2p-network: broken link fixed gm frens gm gm : @dasman pushed 1 commit to master: faf8b80e36: event_graph: remove unused code : @dasman pushed 1 commit to master: 13ea9a5ffe: bin/deg: update README.md hmm it's happening again [ERROR] [P2P] Broadcasting message to tcp+tls://151.80.214.233:60886 failed: Channel stopped [ERROR] [P2P] Broadcasting message to tcp+tls://irc1.dark.fi:26661 failed: Channel stopped [ERROR] [P2P] Broadcasting message to tcp+tls://151.80.214.233:51894 failed: Channel stopped The node is trying to communicate with itself [INFO] [P2P] Requesting addrs from active channels. Attempt: 2 This info and then those errors [INFO] [P2P] Requesting addrs from active channels. Attempt: 2 [ERROR] [P2P] Broadcasting message to tcp+tls://acab.accesscam.org:26661 failed: Channel stopped Happening on another one as well ok, looking into it brawndo: you're saying that these addrs are the nodes own external addrs? Yes, the actual domains are set as external addrs The IP is the IP of irc1.dark.fi hoth ~ % host irc1.dark.fi irc1.dark.fi has address 151.80.214.233 ok ty I'm assuming there's no DNS resolution implemented, so if you have a domain as extern_addr, you could still end up with your own IP in the peerlist But just resolving domains via system isn't ideal, since someone might want to use Tor - then resolving via system would leak the connection But perhaps we can make a way to resolve domains over the set transport, provided the node has that transport enabled That'd just be an abstraction over SocketAddr or something https://docs.rs/arti-client/latest/arti_client/struct.TorAddr.html ah i know what's happening i think it's connected to when we ping ourselves, though need to confirm that Why do we ping ourselves? see refine_session.rs:361 or actually the docs from line 333 we do this to ensure our addr is actually reachable before broadcasting it to others hm yeah I'm not sure if this is a good idea This will always create 2 (or 3?) channels yes i think 2 Even if your external addr is misconfigured, it should be pruned by people with time, no? yes e.g. they'd add it to the blacklist because it's unreachable So perhaps it's a non-issue they should insert it on the greylist first, then realize it's bad and delete it Right The only issue is if it keeps appearing we keep broadcasting our addrs But it's a double-edged sword. Some legit peer might go offline and become unreachable, and if you ban it, they won't be able to connect when they come back online.
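(A sketch of the resolution check floated above — this uses the std system resolver, which as just noted would leak DNS when routing over Tor, so a transport-aware resolver would be needed in practice:)

  use std::net::{IpAddr, ToSocketAddrs};

  // True if host:port resolves to one of our own IPs, so we can drop
  // our own (domain-form) external addr from the peerlist.
  fn resolves_to_self(host: &str, port: u16, own_ips: &[IpAddr]) -> bool {
      match (host, port).to_socket_addrs() {
          Ok(mut addrs) => addrs.any(|sa| own_ips.contains(&sa.ip())),
          Err(_) => false,
      }
  }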
so they will keep being added to the greylist and deleted rn we're not really banning nodes Yeah I was thinking if it happens too many times you just refuse it But I guess there's no point to that probably we should avoid writing RefineSession or SelfHandshake connections to p2p.channels() anyway which is an easy fix we can also remove SelfHandshake i'm semi ambivalent, maybe haumea has thoughts Maybe add it as a Monday topic? ++ !topic p2p self-handshake and its bugs Added topic: p2p self-handshake and its bugs (by brawndo) Fun how a lot of bg async stuff causes race conditions haha yeah Stupid computers :D X XD gm : gm : o/ : gm : @dasman pushed 1 commit to master: 1377198b03: eventgraph: [WIP] add initial eventgraph replayer code : @dasman pushed 1 commit to master: 957388e3b0: bin/deg: better handle when dag is empty hey : @dasman pushed 2 commits to master: d250c22196: bin/deg: add eventgraph replay mode : @dasman pushed 2 commits to master: fae90c18f7: bin/deg: increase RPC buffer limit gm gm o/ : gm !list Topics: 1. zk: distributing arbitrary points data with the contracts (by nighteous) 2. p2p self-handshake and its bugs (by brawndo) almost had a mini heart attack that the meet started lol (I lost track of time for a second there) (got notified and I was like wait what, is it time already, where did the day go?) i was reading the topics you can do that any time using !list meeting doesn't start until someone calls !start (at 3pm UTC) ah fair okayy gm : gm : o/ : gm : gm : gm everyone : ohayou : !list : oh it doesn't work here : Yeah I'll move it once we migrate here : ah fair. How do I edit a topic? I wanted to change 1. and also include a discussion to see if the approach I took for adding arbitrary constants is overengineering it or not. The reason being that I have crossed 300+ lines of changes. That amount gives me an eerie feeling that maybe a simpler approach might exist. : deltopic : And then add the new one deltopic 1 !deltopic 1 Removed topic 1 now that I think about it, two separate topics would have made more sense !topic zk: distributing the data for arbitrary points with contracts Added topic: zk: distributing the data for arbitrary points with contracts (by nighteous) !topic zk/zkas: discussion on the approach for addition of arbitrary points to zk/zkas. Added topic: zk/zkas: discussion on the approach for addition of arbitrary points to zk/zkas. (by nighteous) : thanks : np !topic darkirc migration Added topic: darkirc migration (by brawndo) : @skoupidi pushed 1 commit to master: 9e71055a9d: drk: fixed token minting and added its fee call gm gm gm : gm hi : hi calm Hello : ACTION creates a portal : o/ o/ : gm !list Topics: 1. p2p self-handshake and its bugs (by brawndo) 2. zk: distributing the data for arbitrary points with contracts (by nighteous) 3. zk/zkas: discussion on the approach for addition of arbitrary points to zk/zkas. (by nighteous) 4. darkirc migration (by brawndo) \o !start Meeting started Topics: 1. p2p self-handshake and its bugs (by brawndo) 2. zk: distributing the data for arbitrary points with contracts (by nighteous) 3. zk/zkas: discussion on the approach for addition of arbitrary points to zk/zkas. (by nighteous) 4. darkirc migration (by brawndo) Current topic: p2p self-handshake and its bugs (by brawndo) : elo Hey : We're on old ircd still o/ hey : oh okay about this topic: so periodically, we do a "self handshake" or version exchange with ourselves.
this involves creating two channels, a manual connection with ourselves and an inbound connection via the acceptor. we then do a version exchange between these channels. : hello draoi we do this to 1) ensure our external addrs are valid before sharing them 2) couple the external addrs with a last_seen field that records when our addr was last reachable by us. we then broadcast this tuple of (external_addr, last_seen) to other peers in protocol seed and protocol addr. because channels created by Self Handshake (like the refine channels) disconnect quickly and are pruned, in channel.rs, we avoid printing channel send and recv errors if they come from one of these sessions. however rn p2p.broadcast() can trigger a "Channel Stopped" error when we broadcast msgs to all channels() around the same time a connection to ourselves disconnects. to fix this we can either: 1) avoid printing Channel Stopped errors if it's a RefineSession (since this is expected behavior), 2) avoid adding RefineSession connections to the p2p list of channels(), 3) rm the SelfHandshake regarding the latter, we could make other peers check our external addrs instead by simply sending our external addr without knowing whether it's valid and allowing the refinery of other peers to process it, however we need to decide what the last_seen should be (for example it could be set to 0 which would give it priority in the receiving peers' greylist). ok my thinking is that we should avoid the node doing broadcast() trying to communicate with itself i don't think self ping is needed. if it's needed, then do it once and never again The reason this is happening is likely a race condition I believe the self-ping to validate external_addr is not absolutely necessary Even if it is misconfigured and broadcasted to others, they will eventually prune it because of not being able to connect so the self-ping is to like 'clean' the external addrs before broadcast? idea is to reduce network traffic and compute overhead for peers? HCF: no it's just to make sure the configured external_addr is correct HCF: Yeah the idea was to make sure external_addr is reachable yes and to couple the addr with a last_seen field rather than putting an empty value like 0 ok ty Why is last_seen needed for self? is the 'empty' case something that can be resolved with Rust types rather than trying a connection? brawndo: because AddrMsg is a tuple of (addr, last_seen), so when we are broadcasting our addr, we must put some value there It should just be the current time At time of broadcast ok but generally last_seen means last reachable You shouldn't trust that number coming from an arbitrary node ^ agreed Every node would maintain it for itself true, and we don't they do (other nodes) So you can just put current time when sharing it (Sharing your own external_addr) yes, we can do that if it's preferred ++ only if not doing self ping I wouldn't do self-ping either 'it could be set to 0 which would give it priority' this is a bit concerning to me. it seems easy to spam this and confuse peers HCF: we're saying to set it to current time The time you're sharing doesn't matter however we still need to keep in mind the hostlist ordering ++, just wondering if the priority thing already exists The receiving node will (eventually) attempt a connection they are ordered by most recent timestamp does monero do self ping?
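(A sketch of the agreed alternative — stamp our own external addrs with the current time at broadcast instead of deriving a last_seen from self-pinging; Url here as used by the p2p code:)

  use std::time::{SystemTime, UNIX_EPOCH};
  use url::Url;

  // Build the (addr, last_seen) tuples we share in protocol seed/addr.
  fn own_addrs_msg(external_addrs: &[Url]) -> Vec<(Url, u64)> {
      let now = SystemTime::now()
          .duration_since(UNIX_EPOCH)
          .expect("system clock before epoch")
          .as_secs();
      external_addrs.iter().cloned().map(|a| (a, now)).collect()
  }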
And know for themself yes we took it from monero ah i don't think it exists in bitcoin It's a bad optimisation IMO We could implement a DNS resolver ourself and use that to determine if the external_addr is "valid" You only need to do that when (re)configuring the p2p instance and maybe drop the nightly Ip from rust::std::net and go stable XD No that's needed to filter non-public ranges I can only dream then XD nightly dreams XD :D seems like we have rough consensus here next? i'm just re-reading the above ok I think a bad external_addr is a non-issue as it should be filtered over time my impression is that dropping the self-ping makes sense. agreed that it's a race condition and maybe something we can avoid i think p2p.broadcast() probably should only touch manual, inbound and outbound, right? but will defer to others because I've not been thinking about it as long frieren: It uses an inbound slot i think brawndo means broadcast() has a race if it sends to channels that were stopped ahh ic That's the issue as it opens 2 channels And then broadcast() picks them up yeah seems silly to try to broadcast to yourself hmm tricky, draoi maybe ask in #bitcoin-dev or #bitcoin-wizards on Libera about bitcoin p2p design, and check libp2p / others as well So, could we just remove this all and then figure out again if/how to validate external_addr in a better way? sure i can do a bit of research and remove well maybe better to get a better sense first of what others are doing before doing any drastic moves ok but i'm in favour of removing too it just doesn't seem needed. if i configure an external_addr then i expect it to be correct, and i don't want my node to verify that for me i always think of that meme of a drooling wojak trying to plug something in to his head (only if it's cheap/easy and not expensive) lol yeah it's a bit like that !next Elapsed time: 19.0 min Current topic: zk: distributing the data for arbitrary points with contracts (by nighteous) nighteous: here? nighteous was on darkirc just now for this frieren and brawndo have to decide whether or not to distribute the arbitrary points with contracts or not oh yeah I was typing, here just getting down all of the details for a quick recap ok waiting for more elaboration brawndo: could we do like zkas_db_set()? and then remove all presets, and just have money deploy all the points we use in src/contract/ nighteous: customary to explain the problem, not just for me/brawndo but others sitting in listening I'm not fully clear on what is needed oh yeah sorry! So on running find_zs_and_us for calculating us and zs from the Generator value which the user defines in zkas, the calculation of the values is slow (in the ballpark of 17-20 seconds) what's the size of the generated data? so we know the overhead for adding them to contracts 17-20s ain't slow frieren sent a message about this when we last discussed it: "G = 64 bytes, u = 32*H, z = 8*n so total = 64 + 32H + 8n" for the size of calculation what is H and n? upgrayedd: It is very slow lol H is a constant defined in constants.rs which other structs were using. (H = 8) n not so sure, you wrote it... what is the length of z?
you can see these in the code len of z = 8 H: usize = 1 << FIXED_BASE_WINDOW_SIZE; FIXED_BASE_WINDOW_SIZE = 8 s/8/3 I think it can be bundled no prob The bigger issue is that likely the constants logic has to be rewritten in order to support this nighteous: look at the code, darkfi/src/sdk/src/crypto/constants/fixed_bases/nullifier_k.rs:21 as an example G = 32, Z = 8n, U = 32*H*n what is n? you should've come to this meet with that info what is the overhead in terms of bytes this file is cursed lol. I think we should have some documentation about where these numbers come from it should be the num_window size which is 8 in my case (not saying that the implementation is bad, just hard to grok) nighteous: so what is the number of bytes in total for all the data? frieren: 64? (8*8) oh wait HCF: https://github.com/zcash/halo2/blob/7df93fd855395dcdb301a857d4b33f37903bbf76/halo2_gadgets/src/ecc/chip/constants.rs#L117 all of the data ty yes all the data! 32 + 64 + 32 * 8 * 8 = 2144 brawndo: yeah so basically the DB becomes the main mechanism and we remove the hardcoded constants Yes that's what I meant by rewriting the constants logic ok so 2.1k bytes overhead per constant deployed isn't that quite a lot of data? It still needs to exist pregenerated because of performance overhead Also there will be overhead in retrieving it from the db what about pregenerating 500 constants? frieren: Up to you. I've always been on the side that the constants we have are enough they aren't, and additionally you have constants that cannot be used in different ec ops I'm with brawndo on this, overhead is too much just to cover a very specific niche so in several places i had to make workarounds it's not a specific niche, it's insecure to reuse generators ideally each domain uses a separate generator, but we are sloppy reusing generators in multiple places simply because we don't have enough and the ones we do have are not usable in different EC functions like ec_mul_short(), ec_mul(), .etc which is very limiting for example if you want to do a pedersen vector commit, right now it's impossible with our current setup hm ok then why not add the generators we need as constants? rather than db entries Yeah we should do that yeah so i'm saying we pre-bundle 500 generators frieren: That's still finite and a mediocre solution if custom/user deployed needs something more, they should bundle their generator as part of the init() contract data We should reimpl the constants logic a bit to support arbitrary generators and have a tree storing that And then they can just be included in the contract binary ok what if it's stored on disk as files instead of in a DB? It doesn't have to be stored why not allow contracts to add constants, and then other contracts use those constants? like a global generators tree? That's possible with what I'm saying Just provide the constants in the API like the SDK does Why are you trying to make this complex? ok but why it doesn't have to be stored? how can other contracts use those constants if it isn't stored? Because it exists as code Same thing as nullifier_k.rs for example oh yeah mean they also set them again in deploy()? *oh you They never get set They just exist pregenerated and provided by the contract through its API > frieren: That's still finite and a mediocre solution do you mean hardcoded like we have now? 
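(An aside on the pregeneration cost above — a sketch of deriving a generator and its window tables once, offline, assuming the halo2_gadgets find_zs_and_us helper linked earlier and pasta_curves' hash_to_curve; the domain string and NUM_WINDOWS constant are illustrative:)

  use group::Curve;
  use pasta_curves::{arithmetic::CurveExt, pallas};

  // One-time, at build/deploy time: a domain-separated generator plus the
  // (z, u) tables for windowed scalar multiplication.
  let gen = pallas::Point::hash_to_curve("z.cash:MyGen")(b"G").to_affine();
  let zs_us = find_zs_and_us(gen, NUM_WINDOWS).expect("tables exist"); // from halo2_gadgets
  // Serialize gen and zs_us, and bake the bytes into the contract crate,
  // so runtime never pays the 17-20s cost.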
You're talking about pregenerating 500 constants for everyone I'm talking about each contract pregenerating what they want to offer yep me too, just wondering how they're accessed do you mean another function like metadata()? The same way they are accessed now in the SDK The constants API just needs some reworking for this do we have a call like zkas_db_set() to add the constant? or how does it work? another issue, there is ConstBaseFieldElement, OrchardFixedBasesFull, .etc but these should all be unified When the VKs are generated on contract deployment, they're just passed in as-needed This is done once per deploy so how could the DAO contract access the constants used by money? would it have to do anything? It'd use the money contract as a crate dependency and access the stuff through the provided rust consts In Money the generators would be in the public API ok nice, so you configure the ZK VM's constants like a hashmap basically in deploy You have to copy the data from WASM to the host, but yeah doesn't this require some kind of persistence of the config? It's little overhead since it's only done on deployment Then the VKs are generated and it's set-and-forget What config what persistence? I don't understand well we configure the generators, but then verification happens sometime later and those constants are used so those generators would have to be stored somewhere on disk it seems They're already in the VK We store the VKs ah that's nice You don't need a circuit to verify a proof Just the VK yeah that's perfect Is it clear now? yep \o/ ok :D 1. unify fixed_bases.rs so there isn't multiple enums 2. mechanism to init the generators used by ZKVM 3. move current constants to money? 4. zkas support Yeah I'd do the ZKVM last is that correct? ok The first should be moving the constants to Money, and then having that export working Because the logic in constants has to be rewritten, since we won't be hardcoding any anymore I can help with this if need be maybe the simpler tasks you can itemize for nighteous and oversee the harder parts I'm not entirely clear on how this will work in zkas But let's see we can have it the same as current where it's the constant name then it's just a map in the zkvm from constant name to the value Yeah but now we can't validate their names that's fine, the zkvm will fail then ok if you think so running the zk circuits now requires configuring constants (zkrunner) next? !next Elapsed time: 35.7 min Current topic: zk/zkas: discussion on the approach for addition of arbitrary points to zk/zkas. (by nighteous) I think this topic can be skipped now Yeah it happened :D ++ next then? !next Elapsed time: 0.6 min Current topic: darkirc migration (by brawndo) Should we attempt to do the next week's meeting on darkirc? Maybe start using that more why not? So we can start finding bugs if any ++ There's 1 lilith and a few public nodes deployed Just everyone needs to update to latest master, and use the seed in the config file sure wtm *wfm i'm working on the lag on CTRL-C i think we can go for it! 
msgs get echoed here anyway Then if it works for a week or two, we'll deprecate the old lilith seed and have that So the infra can be fully cleaned up and the DNS stuff can be migrated from my personal infra nice We fixed that version exchange bug too yep So new nodes will now properly reject old(er) ones yeah also dasman is working on the event graph replayer would be good for triaging issues That seems to be progressing afaict I didn't use it yet tho will be done soon (TM) sibe ah yes a classic lol dasman: don't forget to fixup clippy warnings, please and thank you there will be some updates incoming: CTRL-C lag (basically making the Connector prematurely stoppable, so we don't have to wait for the connector to fail on stop), fix/rm SelfHandshake yy ++ also upgrayedd flagged something kinda scary a while back, trying to reproduce draoi: can you remind me? I'm operating on limited memory draoi: Did you find out what's causing the hang? i think you said the net test was acting up occasionally, like not stopping/ aborting oh yeah it was stuck to some refinery endless loop brawndo: yes it's the Connector re: CTRL-C hang and was shut down from the ci agent for running too long lol aha So p2p.stop() needs to aggressively drop it? yes exactly ok nice i'm adding a Condvar so we just call notify() and return whatever future finishes first (the connector result or condvar.wait()) quick instinct prediction: node1 connects to node2, you call stop on both, node2 stops before node1, node1 goes into retry/refinery and keeps retrying to connect to node2, therefore blocking its own stop it's a bit tricky cos seedsync operates a connector but isn't really owned by anything/ is controlled statelessly in p2p.seed(), so i need to move some stuff around yeah that's scary upgrayedd will look into everything should be a stoppable task so main can control all underlying threads/tasks windows-task-manager.exe :D !topic drk updates Added topic: drk updates (by upgrayedd) ok !next Elapsed time: 10.3 min Current topic: drk updates (by upgrayedd) draoi: anything else to add? should we move? all good Yeah we should just start chatting on darkirc :) ok will do ok just quick update on drk: transition to new api going smoothly, fee call added and working as part of block rewards remaining money calls to "fix": token freeze and otc when those are done all basic money calls should be operational for general testing general as in not me localtesting XD very nice, we're working on the UI too will start with the chat imho drk cli is more of a priority actually we can put it in the web browser if there's a community run server people can connect to The IRC? Probably, we need to test multiple clients a lot more I wrote it with that support in mind, but it's largely untested anyway thats all, shall we move/end? cool frfr !next Elapsed time: 5.9 min No further topics !end Elapsed time: 0.0 min Meeting ended hello, have a quick question: completed contrib/tor.html settings. Anybody have any insight on reasons why "New pull request" button is disabled in codeberg? gg everyone screenshot: https://codeberg.org/calm/share/src/branch/master/new-pull-request-button-disabled.png calm, I just used it (didn't make a PR though) so I dont think it is disabled... weird calm: maybe you need to click fork? oh yeah, gotta fork it and then push changes to your forked repo, then there might be a merge button on the top thanks everyone o/ thanks all!
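(A sketch of the prematurely-stoppable Connector idea: race the connect future against a stop CondVar and return whichever finishes first. The select combinator is from the futures crate; connector, stop_condvar and the error variant are illustrative names:)

  use futures::future::{select, Either};
  use futures::pin_mut;

  // Either the connector resolves, or stop() notifies the condvar first.
  let connect_fut = connector.connect(&addr);
  let stop_fut = stop_condvar.wait();
  pin_mut!(connect_fut, stop_fut);
  match select(connect_fut, stop_fut).await {
      Either::Left((result, _)) => result,
      Either::Right(_) => Err(Error::ConnectorStopped), // illustrative variant
  }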
calm: Try the fork and let's see Thanks everyone, very good mtg ty, I'll look into that thank you everyone nighteous: I'll have a look at this constants stuff and then we can chat a bit about it tomorrow Is that fine for you? brawndo: sure thing. Also if there is any prerequisite that I am supposed to know, I'll read about that. : @rsx pushed 1 commit to master: c21773be28: system/condvar: add a unit test to make sure condvar can be double awaited. frieren: remembering some time ago had some issues with the stoppable task if the task is finished and you call .stop() it hangs there waiting endlessly or something along those lines that was fixed iirc oh ok :D darkfi/src/system/stoppable_task.rs:109 self.barrier.notify(); (see the comment) nighteous: Not right now, will have a better picture tmrw alrighty The task needs to be removed from every scope to be stopped Otherwise any Arc will keep it alive Also applicable to StoppableTask::stop(), it might quit the detached task, but the Arc will still exist in any scope it's referenced the data is quite small but yeah in general if the arc is kept alive then that's a leak ++ hi everyone, was able to create a fork. Can a code reviewer or anyone else please provide explanation of the code review process for a forked repo and how approved updates would be migrated over from https://codeberg.org/calm/darkfi to https://codeberg.org/darkrenaissance/darkfi? calm: you open a PR as usual, and someone will review it or discuss it here thank you : @dasman pushed 2 commits to master: 2fc118af51: eventgraph: rewrite eg replay functions and actually log the events : @dasman pushed 2 commits to master: 4186860fe5: bin/darkirc: RPC method to recreate DAG from replay log upon request from DEG : normal use: ./deg : replay mode: ./deg - r : oops I meant: ./deg -r : so darkirc node is running it logs the operation on the dag, specifically "insert", when running deg in replay mode, the node recreates the dag from that log, and sends it to deg tui to browse : @dasman pushed 1 commit to master: 96a16b0de1: bin/deg: remove forgotten unnecessary prints xit exit gm gm gn : @dasman pushed 1 commit to master: b13e3b81ac: bin/deg: update requirements gm gm : gm : hey : @rsx pushed 1 commit to master: 3b4a64c054: system: StoppableTask, add non-async stop_nowait(), and make Drop call this automatically so tasks are auto-cleaned up on exit. : ^ StoppableTasks now auto-stop on Drop : nice : https://discuss.libp2p.io/t/how-we-discover-our-own-diallable-ip-address-and-port/219 frieren here? hey got some time? discuss random stuff yes noice, all good otherwise? yep ;) :) ok lets get to it in money contract I see some asserts in the wasm code, which I'm gonna remove it should return an error instead in src/contract/money/src/error.rs I see some comments about removed codes, why weren't the rest of the codes updated and the comments left there? minor just asking to know if there is specific use case of the code numbers that couldn't be updated at the time or just classic laziness i can fix that I'm doing it no worries it's just they were deleted but the numbers weren't updated vim has a shortcut (ctrl-a) to increment numbers kk yeah just wanted to verify thats the case ok garbage things out, should we move on the auth token stuff we discussed in the past?
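(A tiny sketch of the Arc keep-alive point above — stop() quits the detached future, but the task's memory lives until every scope drops its clone; treat the constructor shape as illustrative:)

  // stop() ends the spawned future, but `keepalive` still pins the Arc:
  let task = StoppableTask::new(); // returns an Arc'd task (illustrative)
  let keepalive = task.clone();
  // ... task started somewhere ...
  task.stop().await; // the detached task quits here
  drop(keepalive);   // only now can the allocation actually go away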
yep ok let me grab some notes so the main change is that token id is derived by the mint authority, which is derived from some combo of secret key, token id blind and a function id, with the rationale being that the function id can be more than money::auth_token_mint(), in order for it to be versatile and allow other contracts (like dao) to be able to mint their own tokens, correct? no, you are mixing 2 things the token itself does not have a mint authority, it has an auth_parent, which is a FuncId (hash of FuncRef) money::token_mint() can only be called by auth_parent function oh so token_mint is common for everyone, its the auth_parent that changes, and thats defined in the token_id/FuncId the old token minting logic made by braw*do is the mint authority .etc, so it now got moved to auth_token_mint() yep correct ok gotcha now, so the logic pretty much is: as long as auth parent is the correct one (and passes), mint this token s,token,coin then the auth_token_mint() uses the user_data field to store whatever specific data it needs correct maybe the naming is bad nw I don't care about naming as long as its descriptive :D so for example you might have a token minted by a protocol, or a DAO .etc which should define its own auth_parent call right? that's why token_mint is designed this way yep noice now I get the split good design choice :) ty ok now to the nitty gritty stuff TokenId is derived as the poseidon hash of the TokenAttributes{auth_parent, user_data, blind} user data is derived as the poseidon_hash of the secret key public coords user_data is specific to the auth_parent contract the question now is, are TokenAttributes confidential (like the secret key) or public values that can be shared around securely? the money::auth_parent or all auth_parent calls? > upgrayedd │ user data is derived as the poseidon_hash of the secret key public coords this is only for auth_token_mint() and not in general so just money::auth_token_mint ? yep, you can put anything you want in user_data it's up to the auth_parent contract how this field is used aha so since its specific to money, those TokenAttributes should be confidential, no? about tokenattributes - it may or may not be public, but the default token_mint() assumes it's private but the token_auth_parent is public the token_user_data is private yeah right now I'm talking money, if someone else needs them to be public they should handle accordingly yeah token_auth_parent is the func id, so its always public yep but the TokenAttributes don't have to be, since the token_id is derived by the poseidon_hash so its safe to assume them as fully confidential, no? (blind should also be assumed as private) however for auth_token_mint(), the attributes aren't private the user_data is public too does it have to be? I mean I don't see where its "revealed" as public it doesn't can you point me?
yep darkfi/src/contract/money/proof/auth_token_mint_v1.zk:38 user_data = hash(mint_x, mint_y) and both mint_x, mint_y are public ok I see, and thats a good segue to what I wanted to ask in the first place if TokenAttributes can be assumed private, it doesn't make sense to derive the user data in auth_token_mint every time you can just pass the full token attributes, derive the token_id and constrain just that like skip: 34-42 lines well the user_data is a single value but we want to store multiple values unless I'm missing something so it must be a hash of (a, b, c) where those are the values you use for example one contract might have 0 attributes, another might have 5 aha so its more like a full ownership proof of the user data derivation so the user_data is a single value in the token_attributes otherwise anyone holding the token attributes should be able to generate a mint proof, since they can derive the token_id directly but don't proof full ownership s,proof,prove it's like token_mint() does not have knowledge of auth_token_mint(). it doesn't know if it needs 0 or 1 or 5 attributes and there's no way in zk to have variable-length arrays in a hash so instead we just store a single value, which is a hash of the attributes used in auth_token_mint() I'm asking a different thing tho did i understand the q properly? ah ok what I'm saying is: since we can consider TokenAttributes{auth_parent, user_data, blind} private, why do we have to derive user_data in the zk proof and constrain it, since we can just directly derive the token_id from the private data (poseidon_hash(auth_parent, user_data, blind)) and constrain just that like we do in token_mint_v1.zk hence why I mention full ownership proof can you give a code sample for auth_token_mint.zk for how you would do it differently? add token_user_data along the rest of the TokenAttributes (lines 22-23) and ditch lines 34-42 all three TokenAttributes are assumed private info, so derive the token_id and constrain just that but again this is not full ownership proof, aka proving you have the original mint secret key, you just prove you know the minting TokenAttributes yep your scheme could also work this is what braw*do wrote, i just ported it to this new system so its more of a: how formal do we want our proofs to be it is as you say deriving the user data by the secret key is the full formal ownership proof way you could do it based off the token attrs being private, or based off a keypair I reckon full formal is/should be the best option in terms of proof completeness maybe your way is simpler while what I'm proposing is more of an optimization since proof is smaller and its only applicable since in money::auth_token_mint we can safely assume TokenAttributes as private you mean token_mint(), but that's correct yeah XD anyway we can think/discuss pros and cons of each approach and do it in the future, doesn't have to be changed now, as current setup is the full ownership proof, which is the proper/formal way imo next? yep sounds good okay another optimization I see we are deriving and constraining the coin in both auth_token_mint and token_mint why is that? since calls are dependent on each other, only one of them needs to do it, and the other grabs the coin from them hmm good point lol, why are we doing that? you tell me XD i think it's copy-paste error we also don't need the value commit you need the value commit minted coin value is not public iirc what do we need that for?
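(A sketch of the two derivations being compared — poseidon_hash as in the sdk; accessors and field names are illustrative. Both end in the same token_id commitment; they differ in whether the proof also derives user_data from the mint keypair:)

  // Full ownership (current auth_token_mint): user_data is bound to the
  // mint public key inside the proof.
  let (mint_x, mint_y) = mint_public.xy(); // illustrative accessor
  let user_data = poseidon_hash([mint_x, mint_y]);
  // Either way, the token id commits to all three attributes (as base
  // field elements):
  let token_id = poseidon_hash([auth_parent, user_data, blind]);
  // The proposed shortcut treats (auth_parent, user_data, blind) as the
  // private witness and constrains only token_id, proving knowledge of
  // the attributes but not of the mint secret key.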
we aren't spending it, just minting it actually yeah, we also don't really do anything with it in the wasm/rust code nice! we can delete all that so based on your design auth_token_mint should be the full ownership proof along with the coin constraining, aka lines 34-56 we don't need the coin and token_mint should just constrain the token_auth_parent, to verify its the correct one oh constrain it in token_mint then? other way around yep token_mint constrains coin, but auth_token_mint does not ok then in token_mint, do we really need to constrain the auth_parent? yes we do since we derive the token_id by it, then the coin, and constrain just the coin coin constraint also verifies the auth_parent, no? no, it doesn't ok maybe I'm forgetting something darkfi/src/contract/money/src/entrypoint/token_mint_v1.rs:58 this is how the parent caller is constrained but we can remove coin and value_commit from auth_token_mint() because they aren't used and we check the coin's correctness in token_mint anyway ah yeah we need to constrain it since its grabbed by the parent call good catch noice ok so proposed changes are: in auth_token_mint remove 45-63 since the coin is constrained in token_mint ++ gg, we made the proof like half the size with just removing "garbage" lol yeah nice, tyty ok gonna do those changes I think thats all we can discuss the change from full formal to semi formal (token attrs discussion) in the future oh wait there is more if we remove said lines from auth token mint, then the proof is identical to token_freeze can we also ditch that? freeze can use the same auth proof ah yeah strange it's indeed the same proof freeze doesn't need to be a two-call like mint, so just reuse the same proof yeah it seems like the correct thing to do, just it's technically different uses wait tho if a contract wants to freeze its tokens, with current design it can't so it might make sense to also make freeze a two-call one [auth_parent, freeze] token_freeze is for auth_token_mint (bad naming) where freeze just constrains the parent_func, like the start of token mint it's not for token_mint() darkfi/src/contract/money/src/entrypoint/auth_token_mint_v1.rs:89 you confused me apologies look at freeze it doesn't work for tokens not using the same auth as money since its ownership proof is the same as money auth mint so for example dao can't freeze its own tokens, if its user data is not poseidon_hash(mint_x, mint_y); yeah as i said, token_freeze() is misnamed, it's specific to auth_token_mint() ah that's what you meant? maybe it should be auth_token_freeze()??
i couldn't think of good names for these fns well the naming is correct the impl is "wrong" token_mint and token_freeze should be the generic calls, operating against some auth parent, like token_mint is doing right now the impl is fine, if a protocol issues tokens, then it should make a contract for freezing i mean it should build its own freezing logic all these coins/tokens are part of money contract only money can freeze them so it should be supported by money like token_mint is right now if i make a custom token, i can add freezing logic there or whatever rules i want to allow or disallow minting of new tokens (it's in the auth_parent) let me think then you must also keep your own freeze tree/flag that's good since each token has its own issuance logic aha thats why you are saying the correct name should be like token_money_freeze or auth_token_freeze so we are defining the freeze logic of the money contract doesn't matter its the native one well maybe auth_token_freeze() auth implies it can't be used elsewhere, so I would avoid it it can't be used elsewhere we can just leave it as token_freeze(), I don't think its that confusing it's specific to auth_token_mint() aah you are saying you are proving the authority to freeze which happens to be the same as token_mint hence the auth_token_freeze naming upgrayedd │ which happens to be the same as token_mint should be: upgrayedd │ which happens to be the same as auth_token_mint yy that's what I meant, don't throw my laziness against me XD ok ;) just being correct with language to avoid confusion yeah true my bad anyway that being said, I reckon with the auth_token_mint changes we can ditch the freeze proof and reuse that in the auth_token_freeze call ++ will wait to see if brawndo has any input otherwise bloat be gone thx real good chat yo hey there : @skoupidi pushed 1 commit to master: 0008360eb2: contract/money: replaced entrypoints code asserts with errors upgrayedd: Yeah so the freeze proof is only supposed to prove you have authority over the mint So I reckon you could use the same proof brawndo: noice so it can be removed/replaced, along with the rest of the bloat right? I believe so gg will yeet then Nice one irrelevant thing: what was the rationale/decision on not having an Unfreeze call? Freezing is used to create the guarantee that no more tokens of that type can ever be minted Allowing unfreeze breaks that promise yeah but ain't freezed token id public? I mean you can know which tokens have been freezed I agree with the rationale, just discussing It should be public ah it also prevents flash inflation [unfreeze, mint, freeze] Yeah once a mint is freezed, the token becomes a fixed-supply token yy good choice so to have an instantly fixed supply you just do a two-part call [mint supply, freeze] Precisely : Hey nighteous : hello there :D : So first it seems we need to implement a more generic CommitDomains, HashDomains, and those OrchardFixedBases stuff : It's the stuff that is used for example here: https://codeberg.org/darkrenaissance/darkfi/src/commit/0008360eb283e1a5b940e78f99e936408d4afeab/src/zk/vm.rs#L98-L99 : `HashDomains` seems to be a trait with a required function Q: https://codeberg.org/darkrenaissance/darkfi/src/commit/0008360eb283e1a5b940e78f99e936408d4afeab/src/sdk/src/crypto/constants/sinsemilla.rs#L93-L121 : Can you already see how this could be implemented more generic? : nope, not right off the bat.
Might have to spend some time with it : I think we'd need some kind of generic struct that we fill with the generators' info : Because with the planned changes, we would need to transfer the info out of WASM into the host : So I think a struct containing all necessary fields could be filled with the constants, and then if these traits are implemented on them, it could work : In the simplest way, something in this manner: https://parazyd.org/pub/dev/random/traits.rs : The info transfer could be done with zkas_db_set : okayy so one generic struct for CommitDomains, HashDomains, and OrchardFixedBases (plus any other i guess?). I did read the comments above the struct definition in src/sdk/src/crypto/constants/fixed_bases.rs:71 but couldn't make a lot of sense out of it. What exactly is OrchardFixedBases for? : It's an enum providing multiple different generator points that can be used for scalar multiplication : What we want to achieve is that instead of providing the hardcoded ones, it's more generic : got it : I'll start a bit of work on this in a new branch and we can continue from there : Just wanna see how the compiler will behave once I make certain changes : alrighty, I'll start whiteboarding and understand what I need to do. : o7 : don't forget our therapy app lol frieren: here? : hahaha I shall be using that :D hey upgrayedd two quick stuff: 1. src/contract/money/model/mod.rs::250 the encrypted note should be moved from MoneyAuthTokenMintParamsV1 to MoneyTokenMintParamsV1 where the coin is 2. src/contract/money/src/entrypoint/auth_token_mint_v1.rs::45-55 all these lines can be yeeted, since we don't check anything from the child call 1. no because that note is specific to auth_token_mint() additionally, since the 1 child call check is removed, we can chain N TokenMint calls, all having same AuthTokenMint call parent although the execution order is children first, then the parent, so we will first mint them, then check the auth 2. true and ++ for chaining I don't think that is a problem, since if auth is erroneous, we will ditch the whole tx 1. aha so thats specific for money huh? i think the assert is just for safety since other contracts might use diff enc_note methods 1. is specific to auth_token_mint() yes correct ok then since 1. is wrong, 2. also can't be done here's an example. it is not related to token_mint() but it is relevant. since the auth call has the note, so the 1->1 relation must remain so it's good to have the check yes with the dao when doing a transfer, we make authenticated encrypted coins we just don't use any of the child data currently it is duplicating money::transfer() encrypted coins (which are not authenticated) i think 2 can be done 2 can only be done if MoneyAuthTokenMintParamsV1 has a vec of enc_notes one for each child then you assert/check enc_notes.len() == call.children.len() 2 as in chaining multiple mint calls to a single auth call ah ic I can do it right now lol because the coin is in token_mint not a big hassle yeah exactly the coin is in token_mint, while its note is in auth_token_mint check bin/drk/src/money.rs::744-753 on how we handle it for example in the wallet to parse our coins nbd we can always make it a vec later on, but just wary about changing things around too much true lets kiss for now o,o i forgot my laptop's password *crying-emoji* nighteous: Keep It Stupid Simple last few days i'm trying to remember oh no frieren, if it is linux then just chroot to reset it?
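(Back to the generic constants idea — a minimal sketch of one struct filled with a contract's generator info and implementing the HashDomains trait, approximating the halo2_gadgets trait shape; the real rework will carry more fields plus the CommitDomains/FixedPoints impls:)

  use halo2_gadgets::sinsemilla::HashDomains;
  use pasta_curves::pallas;

  // One struct, populated at deploy time from the contract's data,
  // instead of a hardcoded enum variant per domain.
  #[derive(Clone, Debug, PartialEq, Eq)]
  pub struct GenericHashDomain {
      q: pallas::Affine, // the domain's Q point
  }

  impl HashDomains<pallas::Affine> for GenericHashDomain {
      #[allow(non_snake_case)]
      fn Q(&self) -> pallas::Affine {
          self.q
      }
  }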
encrypted password lol encrypted disk yeah seems i'm missing a special character lol my condolences lol time to try gentoo i guess transition to being a gentoolman I'd jump in and say, use void i am using void but they started censoring packages in their repos for political reasons. i use linux for freedom of my computing gentoo looks good rn https://www.gentoo.org/news/2023/12/29/Gentoo-binary.html lads can't even keep a distro running nowadays without involving politics, jesus. Guess I'll hop onto gentoo as well once I get my rig gentoo is a much bigger and older linux community with a huge wiki, much better than void imo, esp now with binary pkgs oh yeah, void's wiki is somewhat on the good side but of course no comparison to gentoo or arch's wiki. Community sizes cannot be compared lol. https://cdn.hejto.pl/uploads/posts/images/1200x900/0a1acb8e8c1cca439972b1dbc1d85e59.png XD who knew the meme was right all along :D gm sorry about your password frieren. RIP lain: managed to make net::tests::p2p_test infinite loop again :D frieren: found a nice rust annoyance src/contract/dao/src/entrypoint/auth_xfer.rs::147 you see here we check the bytes match if you change the MoneyFunction::TransferV1 byte from lets say 0x04 to 0x02, the compiler doesn't detect that enum value as changed therefore not recompiling the dao contract, making tests fail well that seems more an issue with the makefile i have a ton of annoyances with rust like not being able to make a tree structure or having to spam Arc<Mutex<...>> constantly makefile just calls cargo more of a cargo cache thing since money is compiled first, but for dao, its like nothing changed, since the enum is used, not the actual byte value darkfi/src/contract/dao/Makefile:27 WASM_SRC should specify it specify what? money contract? does it recompile when running the rust command? if not then yeah its a rust issue which rust command? for tree, check sdk/dark_tree code lol yeah Arc<Mutex<...>> is hella annoying #safety_first oh yeah we had the same experience i wrote a tree too (see bin/drk/src/scene.rs) so pointless having to use a usize instead of a pointer gonna check that another time, wanna finish debloating but ++ on pointer : @skoupidi pushed 4 commits to master: e3e8e3a8be: contract/money: removed unused coin from auth token mint call : @skoupidi pushed 4 commits to master: aa1c5d80d2: contract/money: shuffled functions enum : @skoupidi pushed 4 commits to master: 8280857a00: contract/money: renamed TokenFreeze to AuthTokenFreeze and removed redundant proof : @skoupidi pushed 4 commits to master: e262b08a85: src/system/condvar: chore clippy : @dasman pushed 1 commit to master: 840d3e745e: bin/deg: fix graphing intertwined merges and forks gm gm : gm : you can embed python with restricted access to host (for UI) : so i'll do that : curious me would like to know where python is used in the UI and the dependency is so tight that you need to embed it?? : gm draoi: around?
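(A sketch of the staleness just described — dao's build bakes in the value of money's enum variant, so flipping the byte in money without forcing a dao rebuild leaves a stale comparison in the cached wasm; paths match the files mentioned above, the bodies are illustrative:)

  // money (illustrative): the discriminant dao compiles against.
  #[repr(u8)]
  pub enum MoneyFunction {
      TransferV1 = 0x01, // changing this byte silently invalidates dependents
  }

  // dao auth_xfer (illustrative check): first byte of the sibling's
  // calldata must be money's transfer selector.
  if calls[sibling_idx].data.data[0] != MoneyFunction::TransferV1 as u8 {
      return Err(DaoError::AuthXferSiblingNotTransfer.into()); // hypothetical
  }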
yes finally we got the eternal loop caught in the ci: https://github.com/darkrenaissance/darkfi/actions/runs/9100621532/job/25015882362 ah nice one ty glhf with it :D : nighteous: check out how blender uses python to enable plugins : but more specifically for my use case, i want to allow dynamic expressions like: max(50% current layer - 10px, 34px) : max(lw/2 - 10, 34) # where (lw, lh) are locals bound when the expression is evaluated : that way i can specify the width of a UI element which dynamically resizes when the screen changes : https://agorism.dev/uploads/main.rs : with this and a z-index, you can do everything. you don't need layouters and other stuff : > you can embed python with restricted access to host : Care to elaborate more? : yeah this is specific to UI, but before i thought for making the UI you need wasm but turns out i can do it with python : I'm wondering more how the sandbox is implemented? : ah check my linked code there : i can disable __builtins__ and everything really, and then going further there's things like safe_eval, RestrictedPython (from zope proj), and PyPy sandbox mode : idc about resource usage, just about accessing computer resources like network/files. RestrictedPython utility_builtins seems like a good option + we provide our own module for python code : https://restrictedpython.readthedocs.io/en/latest/usage/policy.html : Interesting, didn't know about this : you can still embed python in wasm, so drk using wasm is good, people can optionally use python there but it's running over wasm : UI has different security needs : https://wasmer.io/posts/py2wasm-a-python-to-wasm-compiler : ACTION tried to write a tree in rust without using Arc... regrets it and now gives in frieren: here? : Aha so it was the tree in rust for choosing python : no : it's because you want to script your UI, for example i can set the x property to min(lw/2 - 10, 34). rather than catching the resize window signal, the render loop just calculates it directly by evaluating the python code : (should have added the /s my bad...) : or for example: is_visible = "drk.get_property('/ui/some_checkbox', 'is_selected')" : ah ok nw, internet hard to get context sometimes lol : Understandable lol : also interesting: https://github.com/bazelbuild/starlark/?tab=readme-ov-file#design-principles : > Hermetic execution. Execution cannot access the file system, network, system clock. It is safe to execute untrusted code. : a reduced subset of python, there's a rust impl: https://github.com/facebook/starlark-rust frieren: here? hey upgrayedd frieren: yo, changed the order of auth_mint and mint before auth was the parent and mint the child in the tx call : @skoupidi pushed 1 commit to master: f52b4573e9: contract/money: switched auth_token_mint and token_mint execution order but that means mint executed first, then we checked the auth ACTION head will explode now the auth is the child and mint is the parent, which is fine since we are on a 1-1 call wanted to ask if you have some logic like that in dao using parent-children stuff from the tx call ok great, so i guess we can think of it like auth is the callback passed into token_mint to make sure the execution order is correct yes we do, check auth_xfer.rs let sibling_idx = call_idx + 1; yeah this is checking the sibling here you always assume the xfer_call is the next call it's weird because we don't have a graph, just a tree so the order is enforced trees are graphs wdym XD sry i mean we only have a tree, not a full graph whats the issue?
or limitation DAO::exec() can call any function, not just money::transfer() yeah whats the supposed order of these? when you want to make a proposal to call money::transfer(), you actually use DAO::auth_xfer(), but DAO::exec() only allows its child to be DAO::auth_xfer(), so we make money::transfer() the sibling. I also see let parent_idx = calls[call_idx].parent_index.unwrap(); that means auth_xfer is child to exec correct? so the order should be [auth_xfer, exec, transfer] yep i think it's: [auth_xfer, transfer, exec] actually nvm you're right no you are correct transfer is a sibling of auth_xfer and enforced to be on the next idx ah yeah because of the spend_hook: the coin can only be spent (money::transfer()) when called by DAO::exec() so the order is [auth_xfer, transfer, exec] and I assume both auth_xfer and transfer have exec as their parent correct? but DAO::exec() MUST have auth_xfer() as its first child yep ok you got it correct here noice so you see the reason why we enforce in auth_xfer() that transfer() is the sibling of auth_xfer yeah makes total sense i can imagine more complex contracts might have a limitation here in the future, in which case we can introduce a more complex graph structure with properties .etc but for now i managed to hack the tree I don't think you need more complex stuff lets see lol each call should have its auth call so the generic order should be: [auth_call, call, exec] yeah but money::transfer() parent must be exec, but exec's child must be auth_call() parents have multiple children its 1->N so yeah, both auth_call and call have exec as their parent maybe you have a situation where dao::exec() wants to call dao::exec() or sth weird lol idk and exec's children are [auth_call, call], in that order ++ oh you can do that its a tree call then has to be a subtree of its own auth_call' and call' true, it's nbd because we: 1. our current needs are fully met so the main tree will become: [auth_call, auth_call', call', exec] 2. we have a proposed upgrade if we ever need more (let me guess... you need more) where auth_call and call' are child to exec and auth_call' is child to call' all properties/order remains the same/enforced one as in normal auth, call, exec true i guess the way to think of the tree is like a filesystem where the directories have owners exactly! check src/sdk/src/dark_tree.rs::855-867 but then when i think of token_mint calling auth_token_mint, it's like doing: token_mint(auth_token_mint__callback), and then token_mint calls auth_token_mint__callback() on completion you see, it pretty much expands like a filesystem folder tree where on top you have /, then /dev,/proc etc.. etc idk maybe overthinking too much yep i checked this code, it's nice and simple my thinking is: we have a mint which depends on an auth, auth is not a callback in that case, as it should always be checked first unless I'm missing how you use the term callback I use it as in case or failure and/or after execution s,or,of ah wait, nvm your way is correct actually it matches the same as the DAO yeah enforced order I reckon it becomes confusing when flattening the trees you are now officially a graduate of anonymous engineering, congratulations since then you "need" to know some graph/tree traversal properties/algos to properly understand whats/why XD lmao, another useless diploma to hang on my wall?
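(The call tree settled on above, drawn out — flattened children-first, giving the execution order [auth_xfer, transfer, exec]:)

  // DAO::exec()                 <- parent, runs last
  // ├── DAO::auth_xfer()        <- MUST be exec's first child
  // └── money::transfer()       <- enforced sibling at auth_xfer's call_idx + 1
  //
  // The nested case from the discussion, [auth_call, auth_call', call', exec]:
  // exec
  // ├── auth_call
  // └── call'
  //     └── auth_call'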
not useless, you can make youtube explainer vids for midwits PHD anon trickery lmao : @skoupidi pushed 1 commit to master: c37618f354: drk: fixed token freezing and added its fee call > ask how to do thing in ##rust > "oh trust me you don't want to do that" ACTION :/ like clockwork lmao let me shoot my foot ffs lmao its probably not possible thats why they say it /s some zoomzoom read the /r/rust and now will school YOU on what is good programming "Sir I totally agree with your advice about good programming, but, the program not running is definitely a bigger problem" frieren here? hey upgrayedd, just finishing, it's my bed time... imma crashing frieren: gonna be quick : @rsx pushed 1 commit to master: 199485d071: wallet: impl properties with PY_EXPR subtype which are dynamically evaluated every frame. context: saw a dummy check in swap, namely: src/contract/money/src/entrypoint/swap_v1.rs::118-124 frieren: Some spend_hook stuff is unclear, so help us out :) that check was checking the spendhook being zero, right now it does something else, which is bloat the question is: since dao needed a special auth_xfer for enforcing the spend hook for transfer does it also need a special auth_swap to enforce the spend_hook for swaps? and to generalize even more: every contract needs the special auth call to be able to use native calls like transfer and swap? yes correct it needs a special auth_swap ok so those lines are useless and will be removed altho you could extend auth_xfer using a tag for example well nvm ignore that last line ACTION realizes we have smart contract functions for this very reason frieren: My thinking is that if we need an auth_* for all kinds of stuff, it's a deep rabbit hole yep Since it ends up needing the DAO contract maintainer to implement an auth for anything not the dao contract maintainer anybody can make an auth module for example if i make a streaming payments api, i might publish a function so it can be composed with daos .etc I see so the logic is forward is non native contracts implement a special auth_* call if they want to use native stuff correct? frieren, brawndo: ^^ wdym by "the logic is forward is non native"? error: parsing english the logic forward is XD don't correct my english, I have zero respect for this language esperanto? si saluton, kiel vi anyway jokes aside, we all clear?
> upgrayedd │ so the logic is forward is non native contracts implement a special auth_* call if they want to use native stuff so the old DAO model didn't need auth modules, but it was not composable greptile said it should be able to call any smart contract function but technically if you just want to call money::transfer() then you don't need the auth module the auth module is only required when you spend stuff held by the contract I think so you enforce the spend_hooks you don't need the auth_module: if DAO::exec() is specialized for money::transfer() (not generic) then you can put the auth_xfer() logic directly in exec() but if you do that, then it's not generically composable with other contracts - and so then you cannot call any function I think I lost you here by splitting the logic which is specific for money::transfer() from DAO::exec() and putting it into DAO::auth_xfer(), then DAO::exec() can work with other contracts such as DAO::auth_swap() yeah what we are saying is that the special auth_* calls are needed when the contract needs to execute native stuff for the contract holdings you should get a wacom tablet so we can do therapy together like for example drk store in the contract, or contract token liquidity shit like that not native stuff, any contract call therapy <3 DAO::exec() can call any smart contract function frieren: Consider the scenario where we want to execute a swap with coins from the DAO So it means that the swap tx would also have to contain a DAO::exec() ? yes darkfi/src/contract/dao/src/model.rs:144 pub auth_calls: Vec<DaoAuthCall>, upgrayedd: So that's what's missing ^ yeah so native calls shouldn't care about the spend_hook since the enforcement is moved to the contracts creators to have the corresponding auth calls Hopefully :D it goes both ways, native coins must have the spend_hook enforced and the spend_hook is DAO::exec() which checks the children match the auth calls set in the proposal check these lines darkfi/src/contract/test-harness/src/dao_propose.rs:111 wdym native coins must have the spend_hook enforced? this is the children for DAO::exec() its not checked anywhere in transfer yes it is can you point me? darkfi/src/contract/money/src/entrypoint/transfer_v1.rs:61 we pass it directly to the proof Yeah so the assumption is that the coin is built with the proper spend_hook oh that's the only way the coin is considered in the DAO's treasury So it can't be cheated as long as the proof proves it and since swap uses the same transfer call, its also enforced there Swap is a different call than transfer btw did you see the dao_propose.rs:111 above? brawndo: same metadata call ah yes ACTION is glad for the extra eyes ACTION bows freiren: which dao_propose? give full paths, I'm tired XD yes i did frieren │ check these lines darkfi/src/contract/test-harness/src/dao_propose.rs:111 frieren │ darkfi/src/contract/dao/src/model.rs:144 anyway i'm going to lay down...
i'm sweating/headache (last day of a mild cold) I probably missed those lines yeah I saw how the dao transfer works when we had the other talk XD this is the auth calls yy I know : @skoupidi pushed 1 commit to master: c3433efe2e: drk: cleanup on swap aisle ok I'm also done for the day brawndo, frieren: thanks for the talk :D peace out np always a pleasure cya o/ cya gn gn o/ gm : gm : gm : o/ : wtf these benchmarks make no sense: : 226.065µs - facebooks starlark (precompiled ast, just running the expr) : 99.427µs - pyo3 (not precompiled) : 1.052µs - my own s-tree expr eval : oh i guess with pyo3 i'm not counting initializing my locals, in which case it becomes 6ms : gm : darkirc been running rock solid for a while now on my end : that's good, few updating incoming tho : *updates : i did notice i've been running both darkirc and ircd in patchy internet and the darkirc reconnect is much more reliable : nice : gm, can please let me know if anybody has insights on any of the following: : 1. is there a max size limit for the number of signatures a transaction can have (tx.signatures.len())? : 2. is there a max size limit established for serialized transaction data (i.e., serialize_async(tx).await.len())? : 3. are there max size limits for the following darkfi::zkas::decoder::ZkBinary vectors: constants, literals, witnesses, and opcodes? : thank you in advance : ^ these limits are not yet set, and they are checked by the consensus code : so the compiler .etc doesn't have to worry about them : hey brawndo, any idea how to resolve this: https://agorism.dev/uploads/foo.rs.txt : the commented line gives me: : error[E0275]: overflow evaluating the requirement `&mut Vec: std::io::Write` : = note: required for `&mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut &mut ...` to implement `std::io::Write` : it's because of &mut s, then recursively it becomes: &mut &mut s .etc : i'm not sure how to encode 2 values since they both need to take ownership of s : i guess i must use an intermediate buffer? : it works if i do: let mut buff = vec![]; len += lhs.encode(&mut buff)?; buff.encode(&mut s)?; : Sec need to reproduce : let me give you a test, one sec : i have it handy : The .txt you gave me compiles : yeah one sec : btw without looking much, maybe you ran into an issue that I mitigated in async-serial : https://codeberg.org/darkrenaissance/darkfi/src/branch/master/src/serial/src/async_lib.rs#L464 : Note how here I put `s: &mut S` instead of `mut s: S` : brawndo: https://agorism.dev/uploads/recursive_encode.zip : check out src/main.rs, uncomment the line to see the error : buff not found in scope :P : change to s : len += lhs.encode(&mut s)?; : anyway nbd, cos i can just do the workaround : Gimme a bit : frieren: Here, patch darkfi-serial with this and see if it helps: https://termbin.com/72de : In cargo.toml you can: darkfi-serial = { path = "/home/user/darkfi/src/serial" } : You also need to do minor modifications in your main.rs, but the compiler should tell you : ++ : it works, should i switch to this? i think maybe SerialEncodable .etc is broken : i also have the same issue for decode too : It's not a full patch : But ok if it's fine, I can patch everything : What do you reckon? : do you think it's better? 
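For reference, a self-contained repro of that overflow plus the intermediate-buffer workaround, using a toy Expr type rather than the actual darkfi-serial traits:

    use std::io::{Result, Write};

    enum Expr {
        Lit(u8),
        Add(Box<Expr>, Box<Expr>),
    }

    impl Expr {
        // With `mut s: S` taken by value, calling `lhs.encode(&mut s)`
        // recursively instantiates S = &mut S, then &mut &mut S, ...
        // without bound, which is exactly the error above:
        //   error[E0275]: overflow evaluating the requirement
        //                 `&mut Vec<u8>: std::io::Write`
        fn encode<S: Write>(&self, mut s: S) -> Result<usize> {
            match self {
                Expr::Lit(v) => {
                    s.write_all(&[*v])?;
                    Ok(1)
                }
                Expr::Add(lhs, rhs) => {
                    // Workaround: encode lhs into a temp buffer, so the
                    // recursive call is always monomorphized with the
                    // fixed type S = &mut Vec<u8>, not a growing chain.
                    let mut buf = vec![];
                    lhs.encode(&mut buf)?;
                    s.write_all(&buf)?;
                    // rhs consumes `s` by value, so S stays the same type
                    Ok(buf.len() + rhs.encode(s)?)
                }
            }
        }
    }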
i don't see any downside : seems an improvement if it allows recursive serial/deserial : Yes I ran into this when I was doing the async-serial : And chose this approach from the patch, you can see in the codeberg link I pasted above : I'll make the switch then : yeah especially since i can't think of a workaround for Decode : i guess it's impossible without being able to rewind the decoder : Where? : 1 => Self::Add((Box::new(Self::decode(&mut d)?), Box::new(Self::decode(d)?))), : mm : to workaround this, you'd need to read the bytes from d without capturing it, then advance d, but then you need to rewind it : If that doesn't work, you could just make the boxes outside of that scope : let lhs = Self::decode..., let rhs = Self::decode... : it's not the scope, the issue is Self::decode(&mut d) is recursive so you get &mut &mut &mut ... d : ok, need a few to make the patch and then we can see what's up : ah i guess it would work actually if i make an internal function like Self::decode_ref() which accepts d: &mut D instead of mut d: D : ok that worked : fn decode_ref(d: &mut D) -> std::result::Result { Self::decode(d) } : 1 => Self::Add((Box::new(Self::decode_ref(&mut d)?), Box::new(Self::decode(d)?))), : brawndo: ^ : oh nvm i had my code commented : it doesnt work lol : @parazyd pushed 1 commit to master: f87382e856: serial: Use mutable references for non-async {en,de}code functions : okay : frieren: What doesn't work rn? : brawndo: https://agorism.dev/uploads/recursive_encode.zip : one sec : oh nvm it works! : thanks a lot : ah great : yw :) : As a bonus now both the async and non-async functions use the same type of reference : >mr. proper : ACTION "dwjdwjdwjdwjdwj..." : well dwdwjdwdwj : wut : lmao, you didnt have to do but thanks a lot : ^ vim actions : hahaha : vim macros rock : Then I just spam @ : you should try mapping Q = @q, it's a gamechanger : :D : Gamer bindings : shiiet it werks : yeah recently i also mapped z to Plug 'chaoren/vim-wordmotion', and i like that a lot : FOO_BAR or FooBar then z will jump to the B in both of those : In vim there was another nice one : Something very similar to vim-sneak but not vim-sneak : Now in vis I use sneak : also cargo lrun with entr is good, the default cargo run is too spammy with warnings : I think it was this: https://github.com/easymotion/vim-easymotion : oh cool i need to try sneak : But sneak is gud too https://github.com/justinmk/vim-sneak : Yeah it's like a mouse really : Pretty useful : btw one thing to keep in mind that this serial change required : https://codeberg.org/darkrenaissance/darkfi/commit/f87382e8564c27d6b1ee95bbc660410ae72511d8#diff-0ab50fe3ac8f6eeb4cf43451e58545366407dc27 : In sdk/note.rs it required a cursor because it couldn't work on the slice directly : Also here I'm not sure if it's undefined behaviour or it goes in order: https://codeberg.org/darkrenaissance/darkfi/commit/f87382e8564c27d6b1ee95bbc660410ae72511d8#diff-5037e295ab612d4435c53b37d585c77027c5b784 : When you have a tuple of (Decode(), Decode()), is lhs done first? I guess so : biab : o/ : @rsx pushed 1 commit to master: 76d1ec7a04: wallet: add an expr engine for fast evaluation of exprs inside render loops : yeah tuples should be ordered by lhs first : what is undefined behaviour there? 
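And the shape the f87382e856 patch settles on: the trait method takes `d: &mut D`, so recursive calls just reborrow the same reference and the generic parameter never nests. Same toy Expr, a sketch rather than the real darkfi-serial code:

    use std::io::{Error, ErrorKind, Read, Result};

    impl Expr {
        // `d: &mut D` means each recursive call passes a reborrow of the
        // same &mut D, so no &mut &mut ... chain is ever built up.
        fn decode<D: Read>(d: &mut D) -> Result<Self> {
            let mut tag = [0u8; 1];
            d.read_exact(&mut tag)?;
            match tag[0] {
                0 => {
                    let mut v = [0u8; 1];
                    d.read_exact(&mut v)?;
                    Ok(Expr::Lit(v[0]))
                }
                1 => Ok(Expr::Add(
                    Box::new(Self::decode(d)?),
                    Box::new(Self::decode(d)?),
                )),
                _ => Err(Error::new(ErrorKind::InvalidData, "bad tag")),
            }
        }
    }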
the commit is long : oh ic you were linking the tuple nvm, it looks good : The links would take you to the location : Yeah : i finally got this committed https://codeberg.org/darkrenaissance/darkfi/src/branch/master/bin/darkwallet/src/expr.rs : Nice : How is this gonna be used? : so for example you want to put something on the screen which resizes dynamically : for example a box which has a margin of 20 pixels would be: x = 20, w = layer_width - 2*20 : having to catch the resize event and then change the property is cumbersome, it's much easier to put code there which is evaluated in the render loop : python is like 100 ms, whereas this is 1 microsec : check out darkfi/bin/darkwallet/gui/__init__.py:273 : draw() : code = [["as_u32", ["/", ["load", "sh"], ["u32", 2]]]] : api.set_property_expr(layer_id, "rect", 3, code) : *python is 5 ms (just checked) : my code: 2.524µs : ah I see : b : @rsx pushed 2 commits to master: 633aeb447a: wallet: instantly show the window as priority : @rsx pushed 2 commits to master: 62994d050b: wallet: deprecate pyo2, replace with rustpython : why did i write deprecate, i meant remove lol : frieren: Hey, wondering about some stuff regarding transactions, signatures, and fee stuff brawndo: please talk here XD : If we were to add 3 items to the Transaction struct: fee_call, fee_proof, fee_signature brawndo: no need to add items we can filter directly from the existing vecs : Would this allow supporting anyone to attach a fee payment to an existing transaction? And do you see any attacks/issues with this approach? : In practice, person A can create a tx with some calls, and sign them - without signing anything related to fees : And person B can append a fee call and sign the entire tx - plus the things related to fees upgrayedd: Yeah also true : (see ircd #dev too) upgrayedd: Probably better that way too, to preserve any call order yeah exactly, and minimal changes upstream we just change the src/tx/mod.rs::verify_sigs() logic What if the fee payer needs to merge some coins in order to pay the fee first? and the signature attachment during tx creation Would they also be able to append that Money::Transfer? yeah they would How would the sigs for that be verified?
actually wait no we still have the consuming problem (delayed tx after modified merkle tree) aha lets assume we don't have that issue the flow would be exactly the same, the user creates a transfer call merging their coins and signs that (along with the rest of the tx calls), and then creates a fee call using the output of that call, and signs the full tx calls + fee call so what we are discussing is decoupling the fee call signature But the previous calls would have invalid sigs that way where in a Transaction, every other signature must be over the tx calls excluding fee, while the fee signature must be for everything oh yeah true to do that, each signature must be only for its corresponding calls if we do that actually, we directly support that without extra hustle Which breaks everything since it becomes trivial to tamper ;) yy ik (was gonna write that, BUT ok so the only thing is either everyone signs everything or we just allow the special specific case of Fee to be its own thing aka everyone still signs everything excluding fee, and fee attached signs everything The latter is fine, there's just the issue of it requiring a big enough utxo well we are entering ux territory here tho but again, unless we solve the consuming problem we can't tackle this True anyway lets wait for frieren input my gun is loaded to make the changes XD :D : @skoupidi pushed 1 commit to master: 4661d797cd: drk: attach-fee added and fixed otc swap : @skoupidi pushed 1 commit to master: 0243560e1f: drk: added fee call to deployoOor calls hey just catching up bitcoin uses SIGHASH flags for this: https://learn.saylor.org/mod/book/view.php?id=36341&chapterid=18919 ALL, NONE, SINGLE and ANYONECANPAY > Bitcoin signatures have a way of indicating which part of a transaction's data is included in the hash signed by the private key using a SIGHASH flag. The SIGHASH flag is a single byte that is appended to the signature. Every signature has a SIGHASH flag and the flag can be different from input to input. so for example, i can imagine a flag which says "sign all calls before this one, but none after" frieren: yes but is it safe?
the main thing signatures protect against is stopping people modifying the tx i created, so if i want to then say "this tx CAN be modified" then that's on me namely if i say "calls can be APPENDED to this tx" (hence this sig only signs calls before and including this one, but none after) that way people can add things like fee calls to the tx aha so moving the responsibility to the tx creator (user) yep they are responsible I wouldn't agree on that statement XD it could even be a list of call indexes which it is signing (or a list of ranges) I still think the best is: everyone signs everything and we only allow the fee call to be an exception my rationale is: every party of the tx decided collectively to delegate the fee, therefore they must all have signed everything excluding the fee, then the fee must have signed everything including the fee ensuring the fee payer didn't mess up the tx other use cases, like allowing partial signing, seem a bit dubious to me but happy to read/get schooled on their safety i agree in general you should sign everything i just don't like making weird exceptions for specific calls well yeah hence why we mentioned entering ux territory always my approach is to look for the most general mechanisms that fit all usecases fee call is added in all money (and deployoOor) calls in drk so I tested everything and it works with fees only "weird" case is the otc so the proposal using the flags to indicate which parts of the tx are signed would allow appending fee calls where you have to do a double round of comms to sign everything yeah but otc you build the entire tx then sign so it's not related to this init -> join -> sign -> attach fee -> sign -> sign init -> join -> attach fee -> sign -> sign -> sign if we allow the fee exception, it can become: init -> join -> attach fee -> sign -> sign first we construct the tx then everybody signs well join and attach fee also sign hence why I excluded yeah we said the same thing :D A: creates tx with her input and output A: attaches a fee call B: adds his input and output B: attaches a fee call A: signs B: signs B: broadcasts only one fee call exists so it's attached after everything else is added oh yeah it should be: A: creates tx with her input and output A: attaches a fee call B: adds his input and output B: attaches a fee call wait actually it's correct there's 2 fee calls no all txs contain a single fee call the order is: A: creates tx with her input and output why is there a single fee call? B: adds his input and output C {whichever between A, B or anyone else}: attaches fee call and signs it A: signs B: signs whoever broadcasts ok for fee related questions ask brawndo :D but the rationale is single input, for anonymity iirc ok gonna unwind, cya tmrw yeah we can discuss this in the future not really needed right now glhf : @dasman pushed 1 commit to master: b83e2a782f: bin/deg: fix multiple args : gm gm : hey : sirs hello going to gym will bbl : oh why hello there, good sir nicee : gm : @rsx pushed 2 commits to master: 1369300ba2: wallet: add PropertyStatus.EXPR : @rsx pushed 2 commits to master: 28aaec3bba: wallet: add stubs for plugin subsystem : hey frieren : around?
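To pin down the fee-exception rule converged on above, a toy sketch of the verify_sigs() change: non-fee parties sign the tx with the fee call stripped, the fee payer signs everything. All types and the hashing are dummy stand-ins, not the actual src/tx/mod.rs code:

    type Hash = [u8; 32];

    struct Call { is_fee: bool, payload: Vec<u8> }
    struct PubKey; // placeholder key
    struct Sig;    // placeholder signature

    // Placeholder: real code would hash the serialized calls.
    fn hash_calls<'a>(calls: impl Iterator<Item = &'a Call>) -> Hash {
        let mut h = [0u8; 32];
        for (i, c) in calls.enumerate() {
            h[i % 32] ^= c.payload.len() as u8;
        }
        h
    }

    // Placeholder for the actual schnorr verification.
    fn verify(_pk: &PubKey, _msg: &Hash, _sig: &Sig) -> bool { true }

    fn verify_sigs(calls: &[Call], sigs: &[(PubKey, Sig)]) -> bool {
        // every non-fee party signs the tx with the fee call stripped out...
        let no_fee = hash_calls(calls.iter().filter(|c| !c.is_fee));
        // ...while the fee payer signs the whole tx, fee call included,
        // so the fee payer can't tamper with the calls they pay for
        let full = hash_calls(calls.iter());
        calls.iter().zip(sigs).all(|(call, (pk, sig))| {
            let msg = if call.is_fee { &full } else { &no_fee };
            verify(pk, msg, sig)
        })
    }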
: https://github.com/rust-lang/socket2/issues/466 : socket2 connect_timeout() is blocking : hey : https://pastebin.com/QrW8Zczm : basically, socket2 connect_timeout() blocks in the case where the timeout is reached : (code sample above illustrates) : so even if we stop the connector, it will still wait outbound_connect_timeout seconds before p2p can shut down : i didn't write this code : oh ok : the original one was using smol::TcpStream which is async : https://docs.rs/smol/latest/smol/net/struct.TcpStream.html : ahh : i'm not sure why that was changed, would be useful if you find the commit (you could use git bisect) : ++ : ok it's this lib https://github.com/smol-rs/async-net : look at the code : async-net/src/tcp.rs:386 : you can use this https://github.com/smol-rs/async-io : gosh smol is a good proj : ah nice : this is the commit where parazyd added socket2: c417ff9d6450f14028fbe2a0dead1b47f33c9513 : i guess .connect_timeout() is overriding the set_nonblocking() call : i think dial() in Transport should be async : oh it is async : i think this issue is here https://docs.rs/socket2/latest/src/socket2/socket.rs.html#241 : but it's discussed at length on the issue : https://github.com/rust-lang/socket2/issues/466 : draoi: i'm looking at the code in darkfi, where is the connect call? i see sth else in transport/tcp.rs : oh nvm ic : ok yeah nice find, this should be fixed : from the issue: "You can call socket.set_nonblocking(true) and then do the poll(2) calls yourself" - i tried doing the first thing but haven't tried manually polling : i think parazyd did this because of TcpDialer::create_socket(), since they/them wanted to set all those options : essentially we have a Socket, and we then want to call connect(), but Async::connect(addr) doesn't accept the socket : addr -> Socket -> ... (our config) ... -> connect -> Async::from(stream) : vs : addr -> Async::connect(addr) : ++ : draoi: i think you shouldn't use .connect_timeout(), just normal .connect() (nonblocking) : i'm looking at async-io code async-io/src/lib.rs:2137 : async-io/src/lib.rs:2166 : async-io/src/lib.rs:1523 : that's what smol is doing : check out this: : /// it is not set. If not set to non-blocking mode, I/O operations may block the current thread : /// and cause a deadlock in an asynchronous context. : async-io/src/lib.rs:679 : thanks, checking : socket2 connect() is blocking tho : if we have a non blocking connect() that's perfect cos we can implement our own timeout easily using futures::select : connect() should not block if set to nonblocking mode : check out this function: async-io/src/lib.rs:1512 : in particular these lines: : this will start the connection, but note it is not async. the socket is set to nonblocking and the connect is started: : let socket = connect(sock_addr, domain, Some(rn::ipproto::TCP))?; : then we construct the Async wrapper around socket: : let stream = Async::new_nonblocking(TcpStream::from(socket))?; : ahh bingo : lastly we want to async wait until the socket becomes available so we call .writable() : stream.writable().await?; : ^ so this is how you do async connect : ok i'll try that, thanks a lot : np, if you look at writable(), you'll see it's calling async-io/src/reactor.rs:494, which then calls the platform dependent branches to poll for when the socket is ready : /// Waits until the I/O source is writable. : pub(crate) fn writable(handle: &crate::Async) -> Writable<'_, T> { : (async function) : ++ : 1. sync start socket connect (but do not wait, return immediately), 2.
wrap socket obj in async wrapper 3. wait until async obj becomes writable : ty : ++ : afk anyone here like javascript? Or use it regularly? : so fucking weird that dropping a thread::JoinHandle means the thread keeps running : and there's no way to kill a thread in rust : idiots : "killing a thread is not good programming" /s : yeah but not my fault the OS api blocks, or other APIs such as user code could block indefinitely : so I need a mechanism to timeout bad code and kill the thread : maybe by using some system call to find and kill that thread? doesn't sound right to me though deki: nope I don't (I try to avoid it as much as I can and so far managed to write only 10 lines in js lol) hehe fair, about the same for me anyway was just curious if there were any js types here the web was a mistake, https://tonsky.me/blog/js-bloat/ : google chrome uses a separate process per tab which is killed if unresponsive : imagine that rust prevents you doing that : any particular reason for rust not letting you do that? I mean, there must be some logical explanation to that... : wait a second, so there is no parent child relation between threads in rust, that just means zombie threads? as far as my understanding goes this is a good read this site about js-bloat lol : yes exactly, it's so dumb : see libera ##rust : i swear they're brainwashed : oh nice https://www.chromium.org/developers/design-documents/inter-process-communication/ : chromium uses processes, not threads and named pipes : nice https://github.com/smol-rs/async-process : smol keeps delivering : i know right : majority of the rust lang users are brainwashed by their favourite influencers : also frieren, can't you do what I suggested? Don't you get a thread id when spawning a thread, could use that to kill it? (not sure though cause I haven't spawned any threads myself) : threads don't have IDs, only processes do : huh... well atleast in python threads do have ids cause I have been logging it. (gimme a second, I am sure I read something about rust thread id too, looking for it in the history) : https://doc.rust-lang.org/std/thread/struct.ThreadId.html > "ThreadIds are under the control of Rust’s standard library and there may not be any relationship between ThreadId and the underlying platform’s notion of a thread identifier – the two concepts cannot, therefore, be used interchangeably" : my bad I was wrong : yeah i'll use processes. I actually already have the code : nicee : what about this discussion how to kill a rust thread ? : https://users.rust-lang.org/t/how-to-kill-a-thread/96422/8 : checked it out, isn't it related to this javascript+rust? (documentation for terminate_execute function mentioned in the thread does say its for JS) : dasman stopped task (yWx0dR): event graph tool : dasman stopped task (SjJ2OA): event graph replayer : dasman stopped task (kVNZYs): read only key so users can view tasks : dasman reassigned task (FQsM3c): handle darkirc tasks stop to @xeno : @dasman pushed 1 commit to master: 8c455abd0a: bin/deg: drop deg config, and use args instead : gm : that discussion also says to make an external process : gm o/ : gm : !list : @darkfi pushed 1 commit to master: e72751a02b: book: contrib tor section, add info on setting gitconfig name/email. 
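Pulling those three steps together, a hedged sketch of the non-blocking connect (assuming the socket2, async-io and libc crates; error handling is unix-flavoured and the option-setting from TcpDialer::create_socket() is elided):

    use std::net::{SocketAddr, TcpStream};
    use async_io::Async;
    use socket2::{Domain, Socket, Type};

    async fn connect_async(addr: SocketAddr) -> std::io::Result<Async<TcpStream>> {
        let socket = Socket::new(Domain::for_address(addr), Type::STREAM, None)?;
        // ... set our custom socket options here ...
        socket.set_nonblocking(true)?;

        // 1. start the connect but do not wait: in nonblocking mode this
        //    returns immediately, on unix typically with EINPROGRESS
        match socket.connect(&addr.into()) {
            Ok(()) => {}
            Err(e) if e.raw_os_error() == Some(libc::EINPROGRESS) => {}
            Err(e) if e.kind() == std::io::ErrorKind::WouldBlock => {} // windows
            Err(e) => return Err(e),
        }

        // 2. wrap the socket object in the async reactor
        let stream = Async::new(TcpStream::from(socket))?;

        // 3. wait until the socket becomes writable, i.e. connected;
        //    a timeout can now be layered on with futures::select!
        stream.writable().await?;
        Ok(stream)
    }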
: @darkfi pushed 1 commit to master: aa6cff4f2e: wallet: add a z_index property to control draw ordering of objs : gm : o/ : variation on kill-a-thread-in-rust generated by gpt-4o, it decided to read stop flag on the first try, but I think it would eventually kill the thread when I'd keep asking : https://play.rust-lang.org/?version=beta&mode=debug&edition=2015&gist=73cbaabe09fed4f7c265dfb0f720de4f : my Rust experience ended 2 years ago when I finished rustlings :D : hey frieren regarding our convo yday, the idea is to replicate the non-blocking connections as in the smol async-io lib, including the cross platform stuff? : the polling of the socket requires different methods depending on unix or windows : https://stackoverflow.com/questions/69114288/rust-tcpstreamas-raw-fd-on-windows : uhh : nm still researching : bit sick today mind fuzzy : nw : brawndo, the generic struct for CommitDomains, HashDomains, and those OrchardFixedBases stuff has started to make sense to me, so I'll try doing something with it. Is the branch that you were talking about related to the generic struct? : Message for reference : > I'll start a bit of work on this in a new branch and we can continue from there : > Just wanna see how the compiler will behave once I make certain changes : gn : hihi! : gm : frieren: you sound pretty excited? new week new you? lol : yeah excited to make progress on my works : i started taking ashwaganda, glycine and inositol supplements and my sleep is so powerful : damn nice : how much do you sleep to wake up this excited along with progress work. No matter how much progress I made, I wake up like this zombie every morning in need for caffeine to even talk like a human : gm : greets : o/ : nighteous: you should try delaying drinking caffeine for 1-2 hours after waking to avoid that feeling : nighteous: Yeah it seems we have to provide an implementation of those that are filled dynamically : ++ on caffeine, eat first : So like a struct that takes all the generator data (including z and u) and then I think the trait can be implemented on that struct : I also think we need two different structs because for example ValueCommitV is "short", which means internally in the ECC halo2 chip, it performs a 64-bit range check : And in the constants you can see it has U_SHORT and Z_SHORT : Wonder if one could do the magic but sure, two structs : also frieren, brawndo: sure I'll try doing that. Lets see how it goes : See how there is `impl FixedPoint for OrchardFixedBasesFull` : So basically instead of the enums, it could just use self : The struct would be Generator { generator: pallas::Affine, u: Vec<[[u8; 32]; H]>, z: Vec }; : Then `fn generator(&self) -> pallas::Affine { self.generator }` : Something in that manner : (This is in fixed_bases.rs) : hmm hmm got it : basic rust question, if we have two different structs (one for short and one for normal values) then would we also need two different impls with the same functionalities? : Yep but it's not a problem : It's already happening anyway for ShortScalar and FullScalar : ohh okay then : Yeah no way around it : ACTION biab, breakfast : fair yeah, python has spoiled me quite the bit. "Oh the type doesn't match? here I'll force it to be this now" lol : :D : python is great, i dont get why ppl prefer javascript : Probably because life's in the browser now : well, youtube indian man tutorials are in javascript. People were told webdev pays alot so you can guess what happened. 
They decided not to learn something else and live in the javascript bubble (atleast this is what I saw amongst my peers (interns or uni students)). I do tell some of them to learn something else but they just say "it is too late now to learn something different" : the problem with india and eastern places is the family push their kids to get a job and be respectable : so the kids end up doing something they dont believe in : also lol "its too late" ngmi friend : Yepp but many don't even care about what they believe in. They just want money to party all night and not work. : ^ this : It's degenerate : indeed is. The kids are treated as retirement plans of adults over here lol : omg i knew a junior dev and the amount of family he was supporting on his tiny salary was crazy : i told him it's not your responsibility but he just was not able to say no : You are not allowed to say no. It is basically asking for a really bad conflict and mental exhaustion (not that you don't get exhausted the other way) : also how do you guys use action? : /me : ACTION shrugges does it work? : nicee : lol : frieren: ohh also also, just struck my mind. The parents here don't really care about you much. They care about their reputation in society more than your beliefs. Get a "good job s : "good job" so that they can show off you lol : (misclicked my bad) : upsides: you have a support network, strong society downsides: doesn't reward ambition, conformity : maybe a bit dense but my old notes on this topic: https://agorism.dev/uploads/demo.html : based on this https://www.jstor.org/stable/1964012 : what do you mean by "support network"? I think I am mixing it up with the family supporting the kids : gm. when will ircd be deprecated? : you will always have relatives to offer you a home : when growing up in a western country, i had friends who at 16 were forced by their parents to leave the home, get a job and pay rent to a landlord : the landlords are scumbags and scam you : people are in general very lost and there's pressure to pay rent/bills (otherwise you're homeless). you're on your own : reka: give a week or 2 : reka: I think today we can do the final dev meeting on ircd and then move : It's been working totally fine this week : oh shit its monday : hahaha : lmao : great, ty !list No topics : frieren: ah well that is one thing I would agree with. There is always a place to be at but hm not sure how much I'd like it cause instead of rent, you pay with mental sanity points : ancient societies had temples and patronage for artisans, philosophers and mathematicians, but in the modern world this is replaced by university which is a degenerate form of this old model : anyway lets move to #philosophy : was about to say that and then rushed out for lunch XD hey is there a doc link for installing the current version of darkirc faustian: https://darkrenaissance.github.io/darkfi/misc/tor-darkirc.html ty do i need to run it through tor it is good practice to do so. (I don't though cause it doesn't work for me for some reason then) alright yeah i used to have conflicts with my vpn and ircd faustian: don't run it through tor just yet, but yeah it's the eventual plan to use tor ok frieren: so it doesnt work to run through tor? it should do, but we need to test it i don't want to hassle fastian with that ++ !topic zkvm constants plan (python approach, rust approach) Added topic: zkvm constants plan (python approach, rust approach) (by brawndo) frieren: here? 
gmgm o/ hey : @skoupidi pushed 3 commits to master: 5c126999a1: contract/money/client: replaced asserts with error returnes and added log targets : @skoupidi pushed 3 commits to master: 0fb90b0978: contract/deployooor/client: replaced asserts with error returns and added log targets : @skoupidi pushed 3 commits to master: e93b6cca95: contract/dao/{entrypoint, client}: replaced asserts with error returns and fixed log targets hey upgrayedd frieren: got 2 q check e93b6cca95fdbbe6db19f03c8d441753482de0fe in src/contract/dao/src/entrypoint/auth_xfer.rs::58 there was this assert!(!xfer_params.outputs.is_empty()); while in 158 assert!(xfer_params.outputs.len() > 1); I assumed these two assertions/checks must match, so made both to check >1 so the question here is: since in second check it was a comment stating that the last output is the change, this code assumes >1 outputs but what happens in the case where a dao proposal wants to consume the whole treasury? is it an oversight or some kind of treasury drain protection? the second q is about smt: does each user/wallet need to keep a copy of the smt? since I don't see it stored natively in the contract tree entries upgrayedd: It should be possible to set an output to 0 value So it doesn't matter functionally aha true forgot that so in that case they just make a zero value output, so the check is still valid Yep question for darkirc, if it's suggested not to run over tor (https://darkrenaissance.github.io/darkfi/misc/tor-darkirc.html), is it fine to skip step 1? Yeah hosting a hidden service is for when you want others to be able to connect to your node through Tor But without that you can still connect to others hey when I run make BINS="darkirc" in the darkfi directory, i get an error: cannot specify features for packages outside of workspace SIN: check that you are on master not tag v0.4.1 How about just running `make darkirc` upgrayedd: auth_xfer.rs:58 shows inputs.is_empty() and outputs.len <= 1 am i missing something? frieren: the if is the opposite of assert, so assert!(!input.is_empty()) becomes if input.is_empty() ah yes it's an oversight, correct i mean about draining the treasury brawndo already answered yep :) o/ hi hi o/ o/ gm hi sup !topic darkirc migration Added topic: darkirc migration (by frieren) !topic deg usage Added topic: deg usage (by frieren) !topic darkwallet scenegraph and drk/darkirc integration Added topic: darkwallet scenegraph and drk/darkirc integration (by frieren) nighteous: yo !start Meeting started Topics: 1. zkvm constants plan (python approach, rust approach) (by brawndo) 2. darkirc migration (by frieren) 3. deg usage (by frieren) 4. darkwallet scenegraph and drk/darkirc integration (by frieren) Current topic: zkvm constants plan (python approach, rust approach) (by brawndo) what does 'python approach' mean? Hey so : nighteous: here? The idea is to have the "constants" made "dynamic". "Dynamic" in the sense that we don't hardcode them in sdk, but have them distributed by contracts/zkas-proofs I've been thinking about this, and there's two things to consider: 1) How they would be used in Python 2) How they would be used in Rust By being used in Python, I mean zkrunner and keeping it simple ahhh So I've come to a couple of conclusions and would like to discuss here: i think in zkrunner, you provide a json explicitly frieren: Can you wait until I finish?
yep ty, this is something to discuss, not set in stone 1) For having the python usage simple, and not introducing any Rust complexity into it - I believe zkas should support defining the constants (EC points) by their coordinates e.g. EcFixedPoint MY_GENERATOR = (0x00....1, 0x00...2), Now, when we want to use this in zkrunner, we can have it run find_zs_and_us() directly rather than precomputing them. This keeps things very contained within python and there's no need to make extra bindings that touch contracts' code. Additionally, this functionality can actually be used to generate the Rust code that find_zs_and_us() produces - and it can be used in the contracts' Rust code. 2) For using the constants in Rust, we could use the above 1) approach, and leverage zkas to generate the constants data These would then become part of any contract's Rust API and they'd be accessible through using a contract as a crate dependency The final thing is using these in the ZKVM, which I believe can be done by having "placeholder" structs which implement the necessary traits required by the chips which use these constants Then these structs can be filled at runtime with the proper data that's contained inside the contract (or generated at runtime in case of Python) - letting the ZKVM use the provided constants An example would be: struct Generator { generator: pallas::Affine; u: UType, z: ZType }; impl FixedPoint for Generator {...}; This trait impl would simply access the fields in the struct, and the struct would be filled at runtime In WASM we need to transfer it from the wasm-vm memory into the host Or we could store it in the DB, but I think it's less overhead to just fetch it from mem, since it's only ever used once, when deploying a contract That's the idea, comments very welcome very good agree with it all & versatile approach are the names in zkas used as the keys, or are they just local names, but the (x, y) is the key? i guess with this, we use (x, y) as the key It sounds safer to use the latter, yeah Since we'd avoid the possible naming collision In zkas the names would be used just for the DSL, to know what to reference nighteous: Any feedback from you on this? can i import consts from other contracts? (Perhaps they're afk, nbd) to avoid redeclaring them redundantly frieren: Yes, it would work by specifying the contract as a Rust dependency. The constants would be part of the public API/model i.e. use darkfi_money_contract::constants::NullifierK if that's what you want ok ic Then you'd have that easily accessible anywhere for any client and contract There's a possible issue with cyclic dependencies For example if the money contract would want to use a constant that the DAO defines But I dunno if that is a valid usecase even its not something to worry about you can soft link the file or even make another crate True *STAMP* next? :D !next Elapsed time: 16.6 min Current topic: darkirc migration (by frieren) it's time to move to darkirc although be ready to upgrade periodically ++ we should update the book ofc Yep, next week we can have the meeting there, it's been working really solid ++ ++ !next Elapsed time: 2.2 min Current topic: deg usage (by frieren) i fixed the CTRL-C btw dasman can we do a mini tutorial? lag on CTRL-C draoi: nice! 
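Back on the constants topic for a moment, a rough sketch of the "placeholder struct filled at runtime" idea (import paths are approximate; the FixedPoint trait, window constant H and scalar-kind markers are assumed to match what halo2_gadgets' ecc chip and fixed_bases.rs expose):

    use halo2_gadgets::ecc::chip::{FixedPoint, FullScalar, H};
    use pasta_curves::pallas;

    // Instead of an enum of hardcoded bases, hold the generator data in a
    // struct and have the trait impl just return the fields, so it can be
    // loaded from the contract (or computed by find_zs_and_us() at
    // runtime in the Python case).
    #[derive(Debug, Clone, Eq, PartialEq)]
    pub struct RuntimeGenerator {
        pub generator: pallas::Affine,
        pub u: Vec<[[u8; 32]; H]>,
        pub z: Vec<u64>,
    }

    impl FixedPoint<pallas::Affine> for RuntimeGenerator {
        type FixedScalarKind = FullScalar;

        fn generator(&self) -> pallas::Affine {
            self.generator
        }
        fn u(&self) -> Vec<[[u8; 32]; H]> {
            self.u.clone()
        }
        fn z(&self) -> Vec<u64> {
            self.z.clone()
        }
    }

A second struct with FixedScalarKind = ShortScalar would cover the ValueCommitV-style short bases that carry the U_SHORT/Z_SHORT data and trigger the 64-bit range check.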
Sweet sorry i haven't updated the README about deg lets try it now woah impressive so deg doesn't use config anymore it's like: ./deg -e localhost:26660 i literally just opened it and it works cd darkfi/bin/deg/ ./deg or ./deg darkirc , for default darkirc guys you should try this brawndo, draoi, upgrayedd & others did you run it? also ./deg -r , goes replay mode so it asks the daemon to recreate dag from db log and sends it to deg to browse Didn't try it yet i'm building it now try it now no config needed draoi wdym building, it's python, just open it dasman: so how does replay work? haha yeah no installing requirements etc ah 17:23:51 dasman | also ./deg -r , goes replay mode so it asks the daemon to recreate dag from db log and sends it to deg to browse so for example me and you have now found that i'm missing data from the log what do you send me? which db log? do you mean ~/.local/darkfi/darkirc_db/ ? /tmp/replayer_log.log this is cool! it's in /tmp rn got a KeyError when I did -r though ah ok who creates this file? is it deg? but also recreates db from log and keep it in /tmp as well no the daemon creates it insert 502f04de9f380f100e984aa115d69cacfd91055d0c64f498c92d6cb0acafbd25 f2VEZgAAAAA0BCNkZXYHZnJpZXJlbiZeIFN0b3BwYWJsZVRhc2tzIG5vdyBhdXRvLXN0b3Agb24gRHJvcGOuWCWrYUGkWMHU5REGBBcywikMv/ShSeKNZTOsBZmzAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAANAAAAAAAAAA== so is the first field the previous hash, and then the data? correct oh no wait insert current_hash data why do you need the current_hash? i think you can get it from the data data has infos about prev (parents), children etc.. i think you can probably simplify this file to just a list of data even creating the dag tree, like key-value, hash being the key, value being the data anyway nbd for now i'm sure it can get simplified a lot, but only browsing the replayed dag is enough right? just like deg itself ok so rn you have the replay functionality inside deg going over the rpc but actually we can make this more basic hmm i'm just reading event graph code, it's quite tightly coupled with network stuff actually how basic?, i know i'm recreating the same dag twice lol well i was thinking if we could separate the EventGraph from its p2p dependency then use it directly in python for replaying and then we can write unit tests / do simulations also i see a bug what bug? for channel in channels.iter() { there's 2 of these loops. if the channels are disconnected or slow then it will block the loop it should be async using FuturesUnordered or futures::select!() well not a bug, just not ideal first loop is to ask for tips, the other one is to actually fetch the events dasman: i'll look at the code and get back to you with more specific suggestions ++ yeah but do you see what i mean? in async code, each loop should be done in parallel, not sequentially aha ++ dasman: about the suggestions, it's so we can decouple EventGraph from p2p, then we can make python bindings and the replay can happen directly in python (not over jsonrpc) then we can split it from deg and have specific tools for playing around EventGraph (or just use python directly)... it's easier (see the work we did with zkrunner for an example) anyway great job ty !next Elapsed time: 18.6 min Current topic: darkwallet scenegraph and drk/darkirc integration (by frieren) ++ ty brawndo, you were doing drk wasm integration or that was upgrayedd?
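Re those sequential channel loops, a hedged sketch of doing the per-channel requests concurrently with FuturesUnordered (Channel, Tips and request_tips are invented stand-ins, not the event graph's real types):

    use futures::stream::{FuturesUnordered, StreamExt};

    struct Tips; // placeholder for a batch of dag tips
    struct Channel;
    impl Channel {
        // stand-in for the real per-channel request
        async fn request_tips(&self) -> Result<Tips, ()> { Ok(Tips) }
    }

    async fn fetch_all_tips(channels: &[Channel]) -> Vec<Tips> {
        // kick off one request per channel: a slow or dead channel no
        // longer blocks the others, unlike the sequential for-loop
        let mut tasks: FuturesUnordered<_> =
            channels.iter().map(|ch| ch.request_tips()).collect();

        let mut results = Vec::new();
        while let Some(res) = tasks.next().await {
            if let Ok(tips) = res {
                results.push(tips);
            }
        }
        results
    }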
yes ah ok, try this out: cd darkfi/bin/drk/ cargo run then in another terminal: python client.py it will print some output that is the property tree, and it's how all the components communicate. there are nodes, properties, signals and methods. Check darkfi/bin/darkwallet/src/net.rs:126 for the core API : frieren, here yes apologies... you mean bin/darkwallet ? ah yep i have it symlinked here sec ++ b-b-building btw in general I refuse to run any python repo without requirements.txt the graphics runs in a separate thread so it isn't blocked by the scripts/logic happening in the backend i just use basic python so all good I see zmq imported Nice ah true it works tho : we're having dev meet on the other side What's supposed to be shown in the window? ok I see window with some rainbow areas just boxes, i've been rearranging stuff but more interesting is the python output because that's how the UI is exposed, and probably the nodes will be exposed through this Same @ rainbows i'm not sure how it will work with wasm what has wasm to do with it? : yeah, I was running some errands and had a minor accident (everyone's fine) so had to deal with that. (reading the logs) isn't drk using wasm? yeah, but you will probably use drk as the underlying lib, so you just use its api It will have to later, it's not implemented yet you shouldn't handle wasm at all aha ok is there anything about the api i can see? to get a better idea and coordinate work? bin/drk/src :D which specifically? you won't be able to use it as lib right now, I still have to integrate/modify as proposed by noot in a PR just looking to finish the underlying functionalities first ok but i mean about wasm, how will it work? in what sense? give me an example/flow for context so i make a contract for a DEX There will be a set of generic functionalities that a contract can use s,contract,plugin, and to be able to operate the dex, i make a wasm plugin for drk, right? what does the wasm's api look like? is it introspectable? are there commands? drk has no plugin system We need to define the functionalities, but I'd start with Sign it only supports native stuff now yeah but i mean in the future, what's the aim? or is it not worked out yet? We'll want to do fine-grained stuff wrt. ACLs TBD but in general you just need attach-fee and sign But generally the WASM and the wallet will be exchanging data that needs to be operated on or signed ok ty, just i'm designing sth as well which overlaps quite a bit It's just important that we don't expose secrets to the plugins, but rather the plugins export us the data we need to sign/prove And the secrets stay safe on the host-side ++ So e.g.
The wallet would export a list of addresses to the plugin, and the WASM could pick one, and expose some data to sign ok it's an ongoing convo, i'll think on this too The host takes it and signs it with the requested key And does whatever is requested with the signed data in general the simplest way to see it is like: plugin handles everything up to tx creation, then passes tx to drk to attach the fee and sign Through usage/research we'll find the set of functionalities that are needed But first there's Sign and Prove when signing, we usually use ephemeral keys otherwise it's linkable The WASM stuff will have to maintain its own dbs too frieren: Yeah but there's no randomness inside WASM, just on the host So the host has to provide the RNG at least ok i'll think on this a bit, maybe i can mock up some proofs of concept i was trying to think of darkirc: we want to be able to simultaneously run different instances It's not the simplest task, as we have to reimplement the runtime again and tune it towards how it's gonna be used in the wallet and also download the data backwards so there's the DAG for today, then the DAG for the day before, and so on which it then can display and insert into the UI But generally consider there's a set of wasm host functions that allow communication Transferring data between the plugin and the wallet ok ty !next Elapsed time: 18.4 min No further topics nice, good meet all tnx frens thanks everyone Thanks gg o/ checkout deg https://agorism.dev/uploads/screenshot-1716217188.png I've done 2 iterations on the wallet-wasm stuff but trashed it because I was dissatisfied with it Will have to lay it out in a doc better ty all thank you everyone, appreciate the flexible approach regarding constants, deg replay mode functionality, and provided darkwallet details. brawndo: yeah HCF: you got an error on deg? i have been iterating on UI a few times but settling on a good design took me a few days to figure out how fonts work wtf https://harfbuzz.github.io/terminology.html : @draoi pushed 7 commits to master: f4d93104b5: seedsync: refactor to enable reseed on CondVar.notify()... : @draoi pushed 7 commits to master: 85fb7bc684: protocol_seed: only append to greylist if addrs msg is not empty : @draoi pushed 7 commits to master: efb3d05449: doc: adjust log level on session/mod.rs : @draoi pushed 7 commits to master: 007500d8e7: net/test: check all hostlists, not just 1 random hostlist... : @draoi pushed 7 commits to master: a092f0d0eb: transport: non-blocking tcp connect... : @draoi pushed 7 commits to master: eca97916f6: connector: add a stop signal to abort the Connector... : @draoi pushed 7 commits to master: 55e9cc21d0: session: manually stop the connector on slot.stop()... merry chrysler lol : nighteous: put darkirc on the phone \o/ : we need people testing it on unstable conns : my internet kinda sucks dunno if that counts pumped for: non-blocking tcp connect : nice : i've had to restart ircd a few times but darkirc never : adversity makes you(r code) stronger dasman yeah : Yep phone works now, nice File "/home/user/darkfi/main/bin/deg/./deg", line 238, in recreate_dag if json_result['result']['eventgraph_info']: ~~~~~~~~~~~^^^^^^^^^^ KeyError ^^ : are the android/docker install instructions up to date? I think I had problems with darkirc on mobile but it was a long time ago HCF: darkirc on latest master?
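Back on the wallet-plugin signing flow from the meeting, a toy sketch of the "secrets stay host-side" idea — the plugin only ever hands over a payload and a key index, and gets a signature back (every name here is invented, not a darkfi API):

    struct Secret;
    struct Signature;

    impl Secret {
        fn sign(&self, _msg: &[u8]) -> Signature { Signature } // stub
    }

    struct Keypair { public: Vec<u8>, secret: Secret }

    struct WalletHost { keypairs: Vec<Keypair> }

    impl WalletHost {
        // Exported to the plugin: only public keys cross the wasm boundary.
        fn export_addresses(&self) -> Vec<Vec<u8>> {
            self.keypairs.iter().map(|kp| kp.public.clone()).collect()
        }

        // Exposed as a wasm host function: the plugin picks a key by index
        // and supplies the payload; the secret never leaves the host.
        fn sign_blob(&self, key_index: usize, payload: &[u8]) -> Option<Signature> {
            Some(self.keypairs.get(key_index)?.secret.sign(payload))
        }
    }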
that error is because you receive an empty dag Yeah I'm on origin/master, fetched today the CTRL-C lag should be fixed now if you want to git pull and rebuild brawndo, frieren: ah a question related to the approaches for the zkvm plan, how do we handle find_zs_and_us() being slow? it still is a slow process... (or did I miss the whole point?) I'm of the opinion that its slowness doesn't matter when it's running in Python : worked for me and then after a format, I did copy the build that fri*ren sent : ok I'll give it another try soon nighteous: And we can extend zkas with code generation that runs find_zs_and_us() : ngl it's relaxin to CTRL-C kill darkirc and it stops near instantly : @foo pushed 1 commit to master: ed4385de0c: fuzz: Add dictionaries, improve README HCF: check if you have /tmp/replayer_log, or try running deg again ah I have no /tmp/replayer_log oh okay, when does the python code run though? I did see some python code in the repo but didn't find where it gets called from bbl dasman, I'll test again another time np, cya o/ nighteous: It's darkfi/bin/zkrunner In src/sdk/python/src/ you can find various Python bindings For example see how python/src/zkas.rs wraps certain types and exports them to Python We can copy the find_zs_and_us() function to darkfi/src/zk/ Then use it wherever necessary oh okay got it : make sure you don't wear your computer out by ctrl-c'ing too often : HCF: yeah i fixed it : the ubuntu apt install openjdk version needs bumping : java is such crap nighteous: info on zkrunner tools here https://darkrenaissance.github.io/darkfi/zkas/writing-zk-proofs.html brawndo: i see references to resuming wasm functions after they run but can't see how to do it. do you know anything about that? like using the Metering middleware to limit a function, running it a little, then continuing it later Not sure what you're talking about Where are you seeing such references? https://github.com/wasmerio/wasmer/issues/700 nvm if you don't know, i will figure it out I think this is for WASI WASI is something we likely shouldn't use, as this is what gives it knowledge and abilities to do system IO And isn't sandboxed anymore https://docs.rs/wasmer-wasix/latest/src/wasmer_wasix/os/task/process.rs.html#116-128 hm maybe wasix is indeed sandboxed ah no it's not i can just run them on a threadpool with a metering upper limit gn gn frieren: Why do you want metering even? : @dasman pushed 1 commit to master: e093c5ed57: bin/deg: update README : gm : gm o/ : gm : gm : frieren: Not sure if you saw the msg : frieren: I was wondering, why would you need metering for the wallet wasm plugins? : if they become unresponsive due to buggy code : i have a design which i'll write up later for modules/plugins : ah you mean in order to quit if they start running an infinite loop or such things? : Yeah perhaps that is ok : But the limit should likely be a lot higher than what it is in the contracts : yes ofc much higher : i was originally thinking to poll them all in parallel, but unless i find resume, i'll just run them on a threadpool : With what I've been looking at, I don't think the wasm plugins need to keep running : They would just be executed on demand : btw the plugins also have to provide the tx call scanning logic for their contract : Maintain any dbs and such : yeah the wasm functions have init() and update(event) calls : the modules are untrusted, and plugins can talk with them. The modules can see which plugin is connecting. : init() creates a new instance of the plugin.
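For the metering angle, a rough sketch of a gas-capped wasmer store (wasmer 4.x-style setup as in its docs; the flat 1-point cost function and the limit are placeholders — for plugins the limit would be much higher than for contracts):

    use std::sync::Arc;
    use wasmer::{wasmparser::Operator, CompilerConfig, Cranelift, Store};
    use wasmer_middlewares::Metering;

    fn metered_store(point_limit: u64) -> Store {
        // flat cost: 1 point per wasm operator; a real cost function
        // would weigh operators differently
        let metering = Arc::new(Metering::new(point_limit, |_: &Operator| 1));
        let mut compiler = Cranelift::default();
        compiler.push_middleware(metering);
        Store::new(compiler)
    }

Once a plugin exhausts its points, its calls trap with a runtime error, so a buggy init()/update() looping forever can't hog a threadpool worker indefinitely.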
there are multiple instances. plugins have their own private data but can provide an API to other plugins. : modules run in darkfid and implement things like: tx scanning logic (decrypting notes) : they can also spawn p2p or use event graph (which might be needed by OTC for example) : Should they really be in darkfid and not the wallet itself? : We managed to decouple the node from the wallet : You might be a bit too optimistic about what WASM is capable of : i'll write up later a text, i have some notes gm hi Draco : frieren: darkfid just provides the json-rpc to retrieve blockchain data, such as blocks txs bincodes etc etc, it doesn't handle any wallet related functionality : all that is moved to drk : brb : b : frieren: around? : @darkfi pushed 1 commit to master: 667035a159: book: extend wallet.md section with specifics on plugins wrt to scene graph : brawndo, upgrayedd: ^ check commit when you can, here's the link (wait until book's rebuilt): https://darkrenaissance.github.io/darkfi/arch/wallet.html#specifics : frieren: I totally disagree with the last paragraph : making easier setup should never mean build a combine-everything-into-one solution : what's wrong with using dynamic objs? : like the linux kernel : each object should be independent and be able to run on its own : the kernel provides you the ability to control independent modules under a master system : it doesn't "unify" their functionalities : i didn't say anything about unifying functionality : just setting up a dozen nodes and configuring RPC in your wallet is not practical : what about "merge all p2p functionality across apps into a unified subsystem" : it doesn't stop you running each app independently : it just means you don't need a different seed node per app : since there's one overlay network used to look up the subnetworks (swarming) : it's not that each app uses the same p2p network, maybe my wording was unclear : how is that different from current lilith functionality? : (they in fact must use distinct p2p networks) : a single node exists : lilith is not dynamic : serving multiple networks : i have to configure lilith for that specific app and it is centralized : every seed is "centralized" in that sense : Hey : anyway p2p is not the point : yes lets say you have a seed node for an edgy app : How do you imagine that a plugin would use the event graph or start a p2p network? : you are liable : I mention the last paragraph since I find it factually incorrect : how did you come to that outcome? : brawndo: by accessing /mod/foo : I don't understand : to instrument a node, you only need 2 things: : - request reply (calling methods) : - pub sub (listen to notifications/events) : upgrayedd: think about it, the seed node operators are running seeds for specific apps and therefore liable for those apps since it's centralized infra. : How are you planning to do this in WASM?
: instead with swarming you just run a single seed node for the overlay network : ++ on p2p swarming : then you don't need to configure the seed for the app, people can just do it automatically : brawndo: i said wasm before, but i think actually it should just be (unsafe) dynamic objs : I'm not sure how the p2p logic would be implemented : Just giving free rein and executing untrusted code without any sandbox is bad obviously : the event graph for example: we just create a new message, and subscribe to new messages : the seed you are running is not running the app itself, it's a matchmaker for other people to connect, like a torrent tracker, you are not liable for the torrents' contents, since you yourself don't have them : (people choose to join this swarm so it doesn't impact the rest of the network) : upgrayedd: ok but also if i make a new app, i need to convince the other seeds to stop their node, add support for my app, then restart their nodes : and i need to run the seeds : dynamic p2p subnets is another thing, I don't disagree with that, yeah swarming is cool and should be added : I disagree with the context/statement : maybe yeah I don't get the wording : or I'm biased since I don't find it "difficult" XD : right now we have darkirc managing keys, but when you compare it with taud, they are similar on the p2p side : so it's some app specific logic + generic event graph : They should still be separate daemons : ^^ : why? : Because every super-app failed and so will this one : I don't see the point in sticking all functionality in a single daemon purely for simplifying the setup : It can be done with a script : ++ : Additionally people might want to run separate daemons on separate machines : In Ethereum even just for the blockchain you have separate programs - one which does the consensus, and the other does execution : They communicate over sockets : in eth you don't need to set up a separate node per app : i just don't see users doing that : if you want separate daemons, it could be a loader which spawns them as child processes, proxying control by means of stdin/stdout : It's simpler in Ethereum since at most you're making signatures : which is an alternative to using dynamic objs : Once you get deeper, like doing atomic swaps, etc., you end up running more programs : the loader is responsible for managing the processes, shutting them down .etc : That's a web browser : I also dislike putting the wallet functionality back into darkfid, since it stops you from using the same node for multiple clients : i didn't say that : i said providing a mechanism to manage the daemons in a unified way using a single connection : Don't think under the assumption that everything is running on a single machine : 1. some kind of kernel which loads them as dynamic objs 2. a loader which manages processes 3. a message broker for discovery of the daemons. daemons register their status with the broker. : it doesn't have to be the same machine, you can configure things for different machines : at the very least you need a broker, when i give you an endpoint like mymachine.com:5588, you can introspect the services being provided : What's the usecase you're imagining? : there will be many apps like darkirc, taud, dao, swaps, accounting ledger, markets, dex, and so on : when you add a plugin, it will be insane to have to set up a new daemon and configure an endpoint for your plugin : we will lose all our users.
: it would be easier if they could configure their node somehow to add the module (where needed), and then the plugin autoloads it from our default endpoint : add a plugin where? : anyway i'm not concerned with this rn, the main stuff in the text is the stuff about plugins : plugin in the wallet : the do-it-all gui wallet you mean? : it's completely standard, see https://darkrenaissance.github.io/darkfi/arch/wallet.html : the apps need to talk to each other, for example, attaching a fee : There likely needs to be a set of base functionality provided by the wallet which plugins then use : Like coin selection, fee attaching, etc. : These are base network things : The plugins would have to implement logic for their specific contract stuff and how to build txs there : Regarding comms, for example in the case of OTC, this needs to be done on a different layer : As it exposes a different type of resource - here you start needing a network likely : did you read the doc? : /plugin/money provides those functions : you forget that apps are composable. the dao can call any function : I'm not arguing that : I'm saying that there's different types of things to do and they're not all-in-one : In the case of OTC, you have the thing that builds the tx, but you also maybe need networking to match your swap : yeah that's why i said there's backend - everything to do with network, doesn't use keys, .etc, and then there's plugin - everything using keys : Something else needs to do the networking stuff : yes that's the backend, hence why i said taud and darkirc can be split from event_graph, the backend is the same for them : for example with taud, that stuff can actually just go directly in tau and it could talk with eventgraphd : But why pollute the p2p network with multiple types of messages one of which is irrelevant for a specific app? : what you are describing is decoupling the underlying modules (like p2p, eventgraph) from the apps themselves : and deploy them as separate daemons which create app specific instances dynamically : i'm not saying to pollute p2p, i'm saying to separate the backends : well i didn't say separate daemon, i actually argue it should be the only daemon running them as modules : that means darkfid will need p2pd to operate, darkirc p2pd and eventgraphd, etc etc : but if not modules, then either processes or at least provide a broker for discovery of the services : you could choose to keep p2p in darkfid, but for example darkirc and taud both manage keys : I'm giving you a "translation" of what you are describing into system terms : yep true : systemd all over again :D : otc, darkirc, taud, dao - all use event graph but require separate daemons to operate. : and?
i mean darkirc obv is a daemon for irc, but in the ui case, it doesn't actually require a specialized daemon, just access to the darkirc event graph
that's what i mean by pushing the functionality into a plugin and using a unified backend for all those use cases
making darkirc (or any daemon for that matter) also operate as a lib is a different thing
like in the ui you can create a darkirc struct/instance
and manage it like that
what you are describing is decoupling the apps from their respective backend, in order to be able to serve more apps with the same backend
yes, and potentially providing a unified "kernel" to manage those backends
to showcase what I'm describing, check bin/darkfid/src/tests
and if not a kernel, then a loader, and if not that, then a broker for discovery
there we generate a couple of node instances for the test
so the test in that case is the overlord manager of said instances
anyway this is not the main section, this is just an addendum
the main part is the stuff about plugins: https://darkrenaissance.github.io/darkfi/arch/wallet.html#specifics
each plugin talks to modules (which are nodes rn)
when the module interfaces are loaded, they have access permissions for them, so they can whitelist certain plugins or allow all
btw we can already support the multiple-apps-single-backend paradigm
at least for p2p
how? i don't think we can
as long as apps don't have the same message names etc, you can attach all their protocols to a single p2p
ah no that's not a good idea
bingo :D
the p2p network will be unstable and fragment
you misunderstood tho, i'm not saying to put all p2p in the same network
then what?
i'm saying (for example) that the event graph as used by most apps doesn't use custom protocols.
the daemons don't need to store keys or do anything. the app-specific stuff could be in the client
yeah that's why we decoupled darkfid from having wallet stuff and moved it all into drk
drk is the client of darkfid
you could do the same for darkirc, for example by creating a decrypt script for weechat
then weechat can handle the keys itself
well for weechat we still need a daemon to handle the messages
why a daemon? you just need an encrypted message handler, which can probably be a python script
for which you configure through fset
ah you're saying the eventgraphd just publishes the messages
which in turn "breaks" the universal compatibility with IRC-compliant clients
since each client needs to have its own decrypt handler
frieren: yeah it just "broadcasts" data to whoever listens
We had that, but it didn't really work out
and they then handle it how they want
(Weechat script)
brawndo: yeah I'm not saying it's a good model
just describing what's the "proper" way to handle what frieren is describing
I gave the darkfid-drk decoupling example for that reason
darkfid is just a "replicator" of current network state
it doesn't operate on the data
drk holds the keys, receives the data (blocks), then parses and handles them
the semi-exception is when running darkfid in mining mode, where you pass the rewards address, but even then, the key to create the miner's/coinbase tx is randomly generated
there are persistent nodes running some process to sync data from the network (darkfid downloading/verifying the blockchain, event graph, ...)
there's scanning done by wallets on startup (check for recv'd payments, decrypt DMs, etc.)
scanning downloads the updates from persistent nodes
lastly wallets can interact with nodes to push data to the network, or communicate with other nodes etc.
--
stop calling them wallets and call them clients
seems like these are the 3 main usage patterns
ok clients
wallets just means they have keys
what you really want is a gui to manage/handle all the corresponding daemon-client combos
a module/plugin in that case is exactly that, the combo of a daemon and a client
for example for the gui to support both blockchain and irc, it should incorporate the pairs
like init a darkfid daemon and a drk client, and a darkirc daemon and its client (which in your case is native)
so the app really manages just combinations, a glorified task manager :D
i think maybe there's just 2 things: darkfid and event graph
depends
is the event graph gonna handle both darkirc and tau for example?
yeah i will write plugins for those in the UI
You need separate instances of the event graph for darkirc and tau
we don't need to run darkirc and taud
Why would you mix them?
in the case of drk, since it's a cli, yeah you just need to manage darkfid, and simply trigger the drk api through the gui
i'm not saying to mix them
++ on not mixing them, we are reverting back to the decoupling-backends-to-serve-multiple-apps convo
every time i talk about spawning p2p, you all assume i'm talking about the same p2p network - i'm not
hence why I mentioned daemon+client combos
i'm talking about a daemon which can spawn a new subnet and proxy the traffic to you
You're talking about running one eventgraph instance
i didn't say anything about instances
But what's the practical difference between 1 eventgraphd running on two p2p networks vs 2 separate event graphs on two p2p networks
brawndo: I think they are describing an overlay manager of instances
When now you need the eventgraphd to proxy the traffic correctly
like having a masterp2p which simply spawns and handles p2p instances
the practical difference is if i make otc, dao, swap, accounting, taud, etc. i don't need multiple redundant daemons running
and those instances are passed to the actual daemon
it's just a comms mechanism
I think this problem needs to be fragmented into smaller ones because we're not landing on the same page
frieren: gui is the single daemon running
gui is not a daemon
it can start and stop randomly
guid then? XD
as I said again, you are probably referring to an overlay master daemon, which is responsible for "spawning" instances given by its client (see the sketch below)
those instances are not new daemons, just stuff run by the master daemon
yes for event graph
i think this + darkfid handles 90% of major use cases
wait
there is more
do you want to use the same backend for deployed stuff in that master daemon?
aka darkirc and tau use the same event graph db?
not the same db
aha
another thing in the future will be the dht
if it's not the same db, then what you are describing is, as I said, a master overlay to handle multiple "instances" of daemons
since you will "spawn" a new eventgraph object for darkirc and taud
so in reality, you just create a new darkirc "daemon structure" to run and manage
you create fewer daemons to manage
wdym by that?
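A sketch of that overlay master daemon, assuming a hypothetical Instance type: one process owning several event graph instances, each keyed by app name and pointed at its own p2p subnet, so darkirc and taud share code but not a DB or a network.

    use std::collections::HashMap;

    struct Instance {
        subnet_seed: String, // seed address for this app's own p2p subnet
        // ... p2p handle, event graph DB path, etc.
    }

    struct MasterDaemon {
        instances: HashMap<String, Instance>, // keyed by app name
    }

    impl MasterDaemon {
        /// Spawn a new event graph instance on its own subnet.
        fn spawn_instance(&mut self, app: &str, subnet_seed: &str) {
            self.instances.insert(
                app.to_string(),
                Instance { subnet_seed: subnet_seed.to_string() },
            );
        }

        /// Clients/plugins address an instance by name,
        /// e.g. "/mod/event_graph/darkirc".
        fn instance(&self, app: &str) -> Option<&Instance> {
            self.instances.get(app)
        }
    }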
no need for taud, daod, swapd, etc.
yeah everything is under masterD
i think it's probably less code and less mgmt tbh
you just define the config of the underlying daemon/process in the client
less code? no, I disagree
since the masterd is a glorified tmux/screen script
yes ofc it's less code
you don't have to add the event graph to every single daemon and add all the daemon boilerplate
you don't even need the daemon
describe how that would look in reality
let's say I want to add darkirc and tau
how would that look in terms of steps
let's do tau: tau on init gets the event graph to do sync, then when using the client, it will download the data, unpack it and do whatever's needed
frieren: are you writing something?
ah no i'm kinda tired, we just all zoomed in on sth really minor when i was more interested in the other stuff
this part is not relevant to my work rn
I disagree on it being minor, we are talking about the backbone of the design
at least explain what you mean by daemon boilerplate
info!("Instantiating event DAG");
info!("Registering EventGraph P2P protocol");
info!(target: "taud", "Starting P2P network");
info!("Syncing event DAG (attempt #{})", i);
(in taud)
and it continues
Again the scope of this is too big and it should be broken into smaller things
tbh each taud workspace should use a different p2p network
How about we start with p2p swarming first?
That will give us a clear separation of the p2p subnets
if by boilerplate you mean the actual code, just create a macro like async_daemonize (sketched below)
sure, but as i said rn i just wanted clarity on the plugin arch
frieren: It's hard because we're mixing things
On one side you have the sandboxed contract stuff, and on the other side you have system IO
++ they overlap a lot
They shouldn't be
wdym overlap?
the dao plugin needs to call money but also coordinate with other dao members over event graph p2p
I would argue they're completely separate things/scopes
why are you mixing contract extensions (the dao plugin) with infra?
The former is about interacting with the contract, and the latter is people
that app mixes different functions
namely: calling another plugin, using the event graph and then pushing the tx to darkfid
presenting data from multiple sources is not mixing functions
or handling in that case
adding a wallet plugin, aka a new contract that does some stuff, is not mixed with social coordination, or wallet coordination
they are distinct processes that happen to be presented under a unified ui
So let's say "calling another plugin" is interacting with the contract. For this, we kinda settled on having WASM plugins that use some standard wallet API
it's not interacting with the contract
Pushing the tx to darkfid to me sounds like a native wallet thing, done at the end when a certain operation is finished
Yes it is
that means calling smart contract functions on-chain
*interacting with the contract client API
Better?
Now what is unclear is "using event graph". To me this is weird because we come into the area of doing system IO
yes, the plugin provides an API which is callable by other plugins for this purpose
What would be the architecture of these kinds of plugins/protocols?
How do we keep it secure?
i wrote a doc, did you read it?
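To illustrate the boilerplate point above: a sketch of folding that repeated init sequence behind one generic entry point. The trait methods are stand-ins for the real darkfi calls quoted from taud, not actual APIs.

    // Every event-graph app repeats the same startup dance; factor it once.
    trait EventGraphApp {
        fn instantiate_dag(&mut self);
        fn register_p2p_protocol(&mut self);
        fn start_p2p(&mut self);
        fn sync_dag(&mut self) -> bool; // false means retry
    }

    fn run_event_graph_app(app: &mut impl EventGraphApp, max_attempts: usize) {
        app.instantiate_dag();
        app.register_p2p_protocol();
        app.start_p2p();
        for i in 1..=max_attempts {
            println!("Syncing event DAG (attempt #{})", i);
            if app.sync_dag() {
                return
            }
        }
    }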
The doc is sparse and doesn't answer my questions
https://darkrenaissance.github.io/darkfi/arch/wallet.html#plugins
You're just saying it's scriptable from any language
There's nothing about security there
> The scene graph then applies access restrictions depending on the ownership semantics of properties and methods (think like UNIX users and groups).
So is it still WASM?
Basically, what's stopping me from running rm -rf /
yes it's sandboxed, whether wasm or not, see https://codeberg.org/darkrenaissance/darkfi/src/branch/master/bin/darkwallet/src/py.rs
brawndo: I think the differentiating point is "Each app has a separate plugin"; frieren: does that mean the dao plugin has an app which consists of a "darkirc" along with the corresponding contract?
frieren: How is it sandboxed? I can't see
no, it may have access to darkirc or other daemons
why would it need access is the question
brawndo: it can be sandboxed by wasm or pypy etc.
what does "using event graph" mean
(X) Doubt
how does the dao plugin "use event graph"?
that's the question
how or what for?
what for
how: by accessing /mod/event_graph
what for: how do you use a dao otherwise? do you copy paste base64 strings?
or for example swaps, you want to list orders
at some point, you need to access data on a network
the reasoning is ux then?
for the dao, you're describing coupling access to the dao comms with the tx generation stuff, so users don't have to do a copy paste from darkirc to the dao client, did I understand properly?
it's not just ux, you cannot do anything. you cannot have an orderbook or a marketplace
1) What
that doesn't make any sense
the problem is you are trying to couple everything under a do-it-all app and minimizing backends
i mean you could say that about any multimedia productivity software like blender, gimp, video editors, etc.
being scriptable does not mean do-it-all
None of those have any security
The idea with using WASM was for safety and running untrusted code
We can't have that with stuff that does system IO unfortunately
you have to distinguish plugins (trusted) and modules (untrusted)
Yeah but you won't be running one without the other
modules are accessed by /mod/foo, and they have access permissions set (see the sketch below)
so they can limit how they're used by certain plugins
plugins are sandboxed, so they can't break this access pattern
moreover, this applies in general to everything such as window events, since it's a generic mechanism
Those access patterns are valid only for sandboxed plugins
Native code can run syscalls
But frankly there's no way around it
yes correct, you shouldn't run any daemon you don't trust
Right
firefox used to have non-sandboxed XUL, which meant you could instrument your browser (such as reloading a local webpage when it changed), but they replaced it with chrome's addon arch which is fully sandboxed
now you cannot do that thing and addons are much more limited in scope, so they really killed their ecosystem
Yeah
okay so if we take the path of "don't run code you don't trust"
Then we don't have to use wasm at all for plugins
plugins should be sandboxed
but the backends should not
backends/modules/daemons (used interchangeably here)
Why is that?
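A sketch of how the /mod/foo access pattern mentioned above could be enforced on the module side; the types and the allowlist shape are hypothetical.

    use std::collections::HashSet;

    struct Module {
        path: String, // e.g. "/mod/event_graph"
        allowed_plugins: Option<HashSet<String>>, // None means allow all
    }

    impl Module {
        // A sandboxed plugin's call is checked here before dispatch.
        fn check_access(&self, plugin_id: &str) -> bool {
            match &self.allowed_plugins {
                None => true,
                Some(list) => list.contains(plugin_id),
            }
        }
    }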
plugins can be fully sandboxed, and they are apps that users download
But they're useless/less useful without the module
a plugin when installed requests the permissions it needs: Vec<(node_path, ObjectType, Action)> where ObjectType is Property, Signal or Method, and Action is Read, Write, Execute (types sketched below)
the UI can decide how it wants to display this info
But what about in practice?
I'm not worried about the theory at all
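The permission manifest from that message, written out as concrete types. Only the Vec<(node_path, ObjectType, Action)> shape comes from the discussion above; the manifest struct around it is illustrative.

    enum ObjectType {
        Property,
        Signal,
        Method,
    }

    enum Action {
        Read,
        Write,
        Execute,
    }

    /// One entry per requested capability, e.g.
    /// ("/mod/event_graph", ObjectType::Method, Action::Execute)
    type Permissions = Vec<(String, ObjectType, Action)>;

    /// Declared at install time; the UI decides how to display it,
    /// and the scene graph enforces the granted subset on every access.
    struct PluginManifest {
        name: String,
        requested: Permissions,
    }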