02:48:00 | justanotheruser: | justanotheruser is now known as jau |
02:48:15 | jau: | jau is now known as justanotheruser |
08:05:15 | holmes.freenode.net: | topic is: This channel is not about short-term Bitcoin development | http://bitcoin.ninja/ | This channel is logged. | For logs and more information, visit http://bitcoin.ninja |
08:36:38 | grandmaster2: | grandmaster2 is now known as dansmith_btc2 |
11:34:07 | Aquent_: | Aquent_ is now known as Aquent |
11:40:58 | Aquent: | Aquent is now known as dewm |
11:54:05 | dewm: | dewm is now known as mewn |
12:02:50 | mewn: | mewn is now known as Aquent |
13:00:37 | hearn__: | hearn__ is now known as hearn |
16:49:19 | Taek: | on transaction ids and malleability, would it be reasonable for transactions to have multiple ids, one for each signature added to the transaction? |
16:50:04 | kanzure: | so something like a signature hash? |
16:50:20 | Taek: | yeah |
16:50:41 | Taek: | since each signature signs one of the inputs, collisions would only be a problem within the transaction
16:50:57 | Taek: | and if each hash points to the same transaction, collisions are irrelevant |
16:51:35 | Taek: | this allows signers to have a guaranteed way to id a transaction, even if they don't know what the full transaction will look like (they hash their own signature) |
16:51:42 | samson2: | samson2 is now known as samson_ |
17:20:09 | gmaxwell: | you're free to do whatever you want... but this seems stupid to me, since for any application where that would be useful you could just track which input is being spent... which is more general and is already what some things do. (e.g. bitcoin core) |
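A minimal sketch in Python of the two identification schemes being contrasted here (all names and data are illustrative): Taek's proposal derives an ID from each signature as it is added, while the approach gmaxwell points to simply keys off the outpoint (previous txid, output index) each input spends, which malleability cannot alter.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double-SHA256, as used for Bitcoin hashes."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Taek's idea: each signer derives an ID from their own signature,
# so they can refer to the transaction before seeing its final form.
def signature_id(signature: bytes) -> bytes:
    return sha256d(signature)

# The alternative: identify what is being spent by its outpoint
# (previous txid, output index), which malleability cannot change.
def outpoint_key(prev_txid: bytes, vout: int) -> bytes:
    return prev_txid + vout.to_bytes(4, "little")

# Example with dummy data: two ways of tracking the same spend.
sig = b"\x30\x44" + b"\x00" * 68           # placeholder DER-ish signature
prev_txid = sha256d(b"some previous tx")   # placeholder txid
print(signature_id(sig).hex())
print(outpoint_key(prev_txid, 0).hex())
```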
17:22:15 | amiller: | okay i'm trying to think through MMR |
17:22:22 | amiller: | i have to think about how updates work again. |
17:22:33 | amiller: | i was starting to get more excited about this because if you only do insertions,
17:22:47 | amiller: | then to maintain the proof for a single coin you only need to receive O(log N) updates
17:22:53 | amiller: | most updates don't affect your proof
17:23:08 | amiller: | i'm referring mainly to gmaxwell's explanation here https://bitcointalk.org/index.php?topic=314467.0;wap2 |
17:23:19 | amiller: | but if there are removals, |
17:23:38 | amiller: | then do I need to process potentially *every* update in order to check if my mmr branch is affected? |
17:32:58 | gmaxwell: | correct. Or someone does, rather, who is willing to provide proofs for you later. |
17:33:14 | amiller: | yeah ignoring that part |
17:33:40 | gmaxwell: | but this is nice in that you can pay for their services... e.g. I could give you a proofless transaction and leave it up to you to handle finding the proofs. |
17:33:53 | amiller: | that applies just as much to the balanced binary tree thing as to MMR though |
17:34:10 | amiller: | (not that that's bad or invalidates MMR, just trying to find what's the difference between these)
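A small simulation of the O(log N) claim above, assuming an MMR-style layout (the code is illustrative, not gmaxwell's post verbatim): with insertions only, the authentication path from a fixed leaf up to the peak of its "mountain" changes only when that mountain doubles in size, so the overwhelming majority of appends can be ignored.

```python
import hashlib, os

def h(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def path_to_peak(leaves, target=0):
    """Authentication path of leaf `target` within its mountain: the perfect
    subtree over the first 2^k leaves, where 2^k <= len(leaves) < 2^(k+1)."""
    k = len(leaves).bit_length() - 1
    level, index, path = [h(x) for x in leaves[: 1 << k]], target, []
    while len(level) > 1:
        path.append(level[index ^ 1])   # sibling on the way up
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

leaves, prev, changes = [], None, 0
N = 1024
for _ in range(N):
    leaves.append(os.urandom(32))       # someone else's coin gets appended
    path = path_to_peak(leaves)
    if path != prev:
        changes, prev = changes + 1, path
# The path only changed when the leaf count crossed a power of two:
print(f"{N} appends, proof for leaf 0 changed {changes} times (log2(N)+1 = {N.bit_length()})")
```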
17:34:25 | amiller: | one thing MMR gives you that the tries don't is that A proof and B proof can both be used to create an A+B proof |
17:34:41 | amiller: | that's important unless you have one transaction per block |
17:34:57 | amiller: | or want to do interaction with a miner to build up a block, which would suck
17:34:58 | gmaxwell: | A difference is that you can track only the data that matters to you; you must still watch all updates, but it's only a small amount of data to track. So you could, in theory, have no nodes that keep all the data at all, only wallets that track their own coins.
17:35:08 | amiller: | that's true of balanced binary tree too |
17:35:11 | gmaxwell: | amiller: A+B proofs can work in anything that doesn't do level compression. |
17:35:18 | amiller: | if you have a branch for your coin in a balanced binary tree |
17:35:31 | amiller: | you can keep maintaining that branch just by looking at the updates that come by
17:35:54 | gmaxwell: | amiller: but you cannot insert a new coin into the tree without potentially knowing all of it. |
17:36:01 | gmaxwell: | so you can't spend with just your data. |
17:36:03 | amiller: | ah yeah that's true |
17:36:04 | amiller: | okay |
17:36:18 | amiller: | what about just having an insertion ordered binary tree |
17:36:22 | amiller: | that's a little more obvious than MMR |
17:36:33 | amiller: | what is it that MMR gives you beyond that? |
17:36:36 | gmaxwell: | that's what a MMR is.
17:37:29 | gmaxwell: | The bigger issue with this space of ideas is that it's seemingly making the wrong tradeoff: carrying proofs around with transactions trades storage for bandwidth, but we're way more bandwidth constrained and seem likely to be forever. |
17:37:39 | amiller: | i thought there was more to do it |
17:37:50 | amiller: | i'm not going to call it MMR if it's just a merkle tree |
17:38:51 | gmaxwell: | amiller: it's an insertion ordered binary tree, which is a little vague. Doing the incremental updates requires a bit of footwork on the leading edge. |
17:39:10 | gmaxwell: | but I'm pretty sure anyone else would just call it an insertion ordered binary tree. |
17:39:13 | amiller: | i guess, i think that's obvious though |
17:39:14 | amiller: | okay |
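A sketch of that insertion-ordered structure in Python (a hypothetical helper, roughly the MMR "peaks" bookkeeping): the appender keeps only one root per completed perfect subtree, so unlike inserting into a search-ordered tree (gmaxwell's point above), appending and committing never require the full data set, only this O(log N) frontier.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

class InsertionOrderedTree:
    """Append-only Merkle structure: keep one peak per perfect subtree.
    Appending and computing the root need only this O(log N) list,
    never the full leaf set."""

    def __init__(self):
        self.peaks = []   # list of (size, hash); sizes are decreasing powers of two

    def append(self, leaf: bytes):
        size, node = 1, h(leaf)
        # Merge equal-sized peaks, like carrying in binary addition.
        while self.peaks and self.peaks[-1][0] == size:
            _, phash = self.peaks.pop()
            node = h(phash + node)
            size *= 2
        self.peaks.append((size, node))

    def root(self) -> bytes:
        # "Bag" the peaks right-to-left into a single commitment.
        acc = self.peaks[-1][1]
        for _, phash in reversed(self.peaks[:-1]):
            acc = h(phash + acc)
        return acc

t = InsertionOrderedTree()
for i in range(5):
    t.append(i.to_bytes(4, "big"))
print(len(t.peaks), t.root().hex())   # 5 leaves -> two peaks (sizes 4 and 1)
```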
18:06:42 | davidlatapie: | davidlatapie has left #bitcoin-wizards |
18:47:37 | petertodd: | amiller: incidentally, after looking at the rules for a MMR in detail, I suspect just using a sparse insertion-ordered binary tree makes more sense in terms of complexity of implementation |
18:47:47 | amiller: | yeah |
18:47:59 | petertodd: | amiller: the logic to validate a MMR proof is maybe 10x more complex than a binary tree proof
18:48:40 | petertodd: | amiller: MMRs are nice for timestamping because if you just need to follow a path back they keep proof sizes shorter in certain cases, but I don't think that advantage outweighs implementation complexity for cases where you need to prove the positional index of the committed data
18:49:07 | petertodd: | amiller: of course, I'm the kind of guy who likes Python, so... :) |
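For contrast, here is roughly how small the "binary tree proof" validator petertodd prefers can be, as a hedged Python sketch (hash and layout choices are illustrative): the bits of the claimed index decide left/right at each level, so position and membership are checked together.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_positional_proof(root: bytes, leaf: bytes, index: int, branch: list[bytes]) -> bool:
    """Check that `leaf` sits at position `index` in a perfect binary tree
    of depth len(branch) committed to by `root`. Each bit of `index`,
    lowest first, says whether we are the right (1) or left (0) child."""
    node = h(leaf)
    for sibling in branch:
        if index & 1:
            node = h(sibling + node)   # we are the right child
        else:
            node = h(node + sibling)   # we are the left child
        index >>= 1
    return index == 0 and node == root

# Tiny self-test: build a 4-leaf tree by hand and verify leaf 2.
leaves = [h(bytes([i])) for i in range(4)]
n01, n23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(n01 + n23)
assert verify_positional_proof(root, bytes([2]), 2, [leaves[3], n01])
```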
18:51:11 | petertodd: | amiller, gmaxwell: re: bandwidth, remember that you can have a dual tradeoff: UTXO set for recent stuff, and only use the TXO set + bandwidth-consuming proof for the occasional spend of old coins. That still keeps the UTXO set limited in size, but without incurring a bandwidth cost in the common case |
18:53:59 | sipa: | petertodd: so you're a "full node" which has some defined subset of the UTXO set available |
18:54:15 | sipa: | petertodd: you receive a block with only spends from that recent UTXO set |
18:54:26 | sipa: | how do you verify its TXO set commitment is correct? |
18:54:51 | sipa: | it seems to me that you need sort of a partial TXO set + index for it, rather than just the partial UTXO set |
18:54:58 | sipa: | (arose from a discussion with gmaxwell about this) |
18:55:12 | petertodd: | sipa: yeah, so the way I'd do it, and I think I suggested this before, is to essentially think of spending a really old txout as providing a proof that the txout is still unspent, as a way to "unarchive" that txout and put it back into the live UTXO set
18:55:58 | sipa: | i'm talking about the opposite thing: spending a _recent_ UTXO does not give you a way to update the TXO set |
18:56:00 | petertodd: | ok, yeah, you definitely do need an index and whatever other data is needed to be able to calculate TXO set commitments for the recent part of the set that's still in the "live" part of the UTXO set
18:56:08 | sipa: | right, ok |
18:56:36 | petertodd: | basically that's just saying "the most recent X months/years worth of UTXO is something everyone must have" |
18:57:26 | petertodd: | how hard you want to enforce that is an interesting question - can be done on a "soft" level by just rejecting blocks on the P2P layer |
18:57:51 | petertodd: | equally, you can account for bandwidth by treating the associated TXO unspentness proofs as part of the blocksize - which is a soft-fork! |
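A rough sketch, in Python, of the hybrid rule being described (class names, the cutoff depth, and the proof-verification stub are assumptions for illustration, not a specified protocol): recent outputs stay in an ordinary UTXO set, while spending an archived output must carry a TXO-unspentness proof that "unarchives" it.

```python
ARCHIVE_DEPTH = 52_560   # illustrative cutoff, roughly a year of blocks (assumption)

class HybridUtxoState:
    """Recent outputs live in a normal UTXO map; older ones are pruned and
    must be 'unarchived' with a proof against the committed TXO set."""

    def __init__(self):
        self.live_utxos = {}   # outpoint -> creation height

    def add_output(self, outpoint, height):
        self.live_utxos[outpoint] = height

    def prune(self, tip_height):
        # Everyone may forget outputs older than the cutoff; spenders of
        # those outputs become responsible for supplying proofs later.
        cutoff = tip_height - ARCHIVE_DEPTH
        self.live_utxos = {op: h for op, h in self.live_utxos.items() if h >= cutoff}

    def spend(self, outpoint, txo_commitment, proof=None):
        if self.live_utxos.pop(outpoint, None) is not None:
            return True        # recent coin: plain UTXO lookup, no proof needed
        if proof is None:
            return False       # archived coin without a proof: reject
        # Archived coin: verify unspentness against the TXO set commitment,
        # which effectively brings it back into the live set for this spend.
        return verify_unspentness_proof(txo_commitment, outpoint, proof)

def verify_unspentness_proof(txo_commitment, outpoint, proof):
    # Placeholder: a Merkle-path check against the committed (pruned) TXO tree.
    raise NotImplementedError
```

Counting the supplied proofs toward the block size, as suggested above, then bounds the extra bandwidth the archived-spend path can consume.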
19:09:16 | sipa: | petertodd: but, as a transition, you would need to start with nodes having a TXO set with all unspent entries indexed |
19:09:34 | sipa: | which i think will be significantly larger than the utxo set itself |
19:09:46 | petertodd: | sipa: yup, so basically you go UTXO -> TXO -> pruned TXO |
19:10:19 | sipa: | the basic data structure you have is a partial merkle tree, and the only operations you need are append and merge
19:10:25 | sipa: | and it can be indexed or not |
19:10:49 | petertodd: | as for how much larger, that's a speed/size tradeoff - essentially the merkle tree can have parts recomputed on the fly to update rather than saving all non-pruned nodes in the tree
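A sketch of the partial-merkle-tree view with merge, in Python (hypothetical code, perfect tree for simplicity; append is omitted since the peak-merging sketch above covers it): a proof is just a partial tree keyed by (level, index), merging two proofs is a dictionary union, and this is also what makes the A+B proof combination mentioned earlier work.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def full_tree(leaves):
    """All nodes of a perfect tree, keyed by (level, index); level 0 = leaves."""
    nodes, level, depth = {}, [h(x) for x in leaves], 0
    while True:
        for i, n in enumerate(level):
            nodes[(depth, i)] = n
        if len(level) == 1:
            return nodes, depth
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        depth += 1

def extract_proof(nodes, depth, index):
    """Partial tree proving one leaf: the leaf plus the sibling at each level."""
    partial = {(0, index): nodes[(0, index)]}
    for level in range(depth):
        partial[(level, index ^ 1)] = nodes[(level, index ^ 1)]
        index //= 2
    return partial

def merge(a, b):
    """Union of two partial trees; inconsistent overlaps mean a bogus proof."""
    for key in a.keys() & b.keys():
        assert a[key] == b[key], "conflicting nodes"
    return {**a, **b}

def root_of(partial, depth):
    """Recompute upward from whatever nodes are present; returns the root
    if the partial tree is connected enough, else raises KeyError."""
    nodes = dict(partial)
    for level in range(depth):
        indices = sorted(i for (l, i) in nodes if l == level)
        for i in indices:
            if i % 2 == 0 and (level, i + 1) in nodes and (level + 1, i // 2) not in nodes:
                nodes[(level + 1, i // 2)] = h(nodes[(level, i)] + nodes[(level, i + 1)])
    return nodes[(depth, 0)]

leaves = [bytes([i]) * 4 for i in range(8)]
nodes, depth = full_tree(leaves)
proof_a, proof_b = extract_proof(nodes, depth, 1), extract_proof(nodes, depth, 6)
assert root_of(merge(proof_a, proof_b), depth) == nodes[(depth, 0)]
```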
19:19:34 | gmaxwell: | ignoring all the skip-the-proofs-if-you-have-the-data stuff, and usecase flexibility... I think one thing I recently realized that I'd missed before (and I don't think anyone had pointed out) is that insertion-STXO proofs seem like they'll likely be better with respect to bandwidth than search-UTXO even though the STXO database is much larger, because stxo needs only proofs for inputs, while utxo needs proofs for outputs, and 2*log(x) is larger than log(n*x) for many values of n,x.
19:56:30 | tromp: | like x>n>=1 |
20:07:23 | gmaxwell: | hah. indeed. |
20:44:25 | andytoshi: | lol tromp, idk why that was so funny |
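A quick numeric check of that comparison (the figures are purely illustrative, not measurements): with x the UTXO count and n*x the full TXO count, two search-UTXO proofs cost about 2*log2(x) hashes versus log2(n*x) for one insertion-ordered STXO proof, and tromp's quip is exactly the condition under which the former is bigger.

```python
from math import log2

x = 50_000_000      # illustrative UTXO count
n = 10              # illustrative ratio of all TXOs to unspent ones

print(round(2 * log2(x), 1))    # ~51.2: two proofs into the smaller search-ordered set
print(round(log2(n * x), 1))    # ~28.9: one proof into the n-times-larger insertion-ordered set
# 2*log2(x) > log2(n*x)  <=>  log2(x) > log2(n)  <=>  x > n >= 1, as tromp notes
```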
21:05:42 | amiller: | this is a nice collection of papers i've never really looked at in directory listing view before http://diyhpl.us/~bryan/papers2/bitcoin/
21:06:52 | nsh: | braindumping is one of kanzure's many features |
21:08:27 | kanzure: | i exist mostly to turn oxygen into code and bookmarks |
21:09:05 | kanzure: | i also recommend http://diyhpl.us/~bryan/papers2/incentives/ and http://diyhpl.us/~bryan/papers2/security/ |
22:07:44 | amiller: | oh, got it |
22:11:12 | gmaxwell: | s/and bookmarks/, email, and bookmarks/ |
22:11:59 | kanzure: | also i log most of everyone i meet in this cross-referencing system so that i can remember who you jerks are
22:12:04 | kanzure: | (tagged conversations, etc)
22:27:07 | nsh: | web-of-jerk technology is surprisingly more sophisticated than web-of-trust technology |
22:30:26 | kanzure: | why is it surprising? |
22:34:36 | nsh: | compliance purposes |
22:34:39 | justanot1eruser: | what is web-of-jerk? A joke? |
23:23:45 | justanot1eruser: | justanot1eruser is now known as justanotheruser |
23:27:53 | gmaxwell: | Anyone have a citation for hub-and-spoke micropayment-channel networks? We've discussed them many times, but has anyone written up a description?
23:28:30 | BlueMatt: | * BlueMatt has written up long design docs for this in email, but never public |