00:00:34petertodd:beamlaser: I've got a half-finished writeup on it
00:01:02petertodd:beamlaser: tl;dr: it's a centralized system, in that the centralization point can at best DoS you; at worst it can put you in a position where you're unable to prove you're honest
00:01:57petertodd:beamlaser: seems to me to be an attempt to shoehorn an "appcoin" into something that'd work better without one
00:02:06beamlaser:i thought it was decentralized using a dht and a quasi blockchain
00:02:34petertodd:yeah, but who signs that blockchain?
00:02:48petertodd:at best that'll be a proof-of-stake system, which is a centralization point where none is needed
00:03:09beamlaser:you are right, is there a better way to implement it?
00:03:27petertodd:there are other issues, like how it appears to be using "linear" blocks when it should be using merkleized binary prefix trees to commit to facts
00:03:57petertodd:beamlaser: yeah, basically let people run their own factom chains, controlled by them, and directly publish the hashes of those chains in the blockchain - no third parties required
00:04:23petertodd:if you need scalability, pick a third party and/or get together with other people who need factom that you trust and work together to build that chain
00:04:45petertodd:any one person in the "subchain" can DoS attack the whole thing, but at least you get to pick who you trust
00:05:01wallet42:hi what is the earliest estimate OP_CHECKLOCKTIMEVERIFY could be live?
00:05:46petertodd:wallet42: yesterday on viacoin testnet, lol. few weeks for viacoin maybe? absolute minimum of months for bitcoin?
00:05:48beamlaser:interesting i might pick it up then peter todd.
00:06:14petertodd:wallet42: may want to have more than just CLTV in that softfork too - see earlier #bitcoin-dev discussion re: greenaddress
00:06:22petertodd:beamlaser: pick it up as in work on the problem?
00:08:21wallet42:the proposal doesnt seem to be very controversial?
00:09:31beamlaser:i will post my progress here
00:12:30petertodd:beamlaser: cool - you may end up duplicating others' work, but at worst you'll learn a lot :)
00:12:48petertodd:wallet42: nope, OTOH not many (any?) have implemented apps against it, so the design might not be right yet
03:23:33kanzure:"Saying math can't prevent double-spending is near equivalent to saying it cannot be done."
03:23:38kanzure:what?? that doesn't sound true.
03:26:50gmaxwell:There was some comment made on bitcoin-development recently where someone made some crazy argument that bitcoin had cryptographic security against double spending that I was surprised no one corrected (but I didn't wade into)
03:28:04gmaxwell:Math can mean a lot of things. It could mean something was sound, e.g. had absolute security, or maybe it means something has cryptographic security (intractable unless P==NP, ideally, though most cryptographic assumptions are not even that strong). Certainly it doesn't mean "cannot be done" in any way shape or form.
03:34:02kanzure:i suppose it depends on what your definitions are, but generally a disconnected system on one side of the planet cannot know that another spend happened on the other side of the planet 1 ns ago
03:34:20kanzure:no matter how much math you use (as far as i know) (cc greg egan)
03:37:03gmaxwell:So, for example, one approach that doesn't work via punishment is quantum cash. Say you have a set of stored qbits which have a public identity and you can only sign with them once without ruining their state. (no one knows how to do this from an engineering perspective, but apparently there is some scheme of apparently physically possible operations that effectively give you that construction)
03:37:09gmaxwell:Is that 'math' ?
03:37:32gmaxwell:it's certainly not 'math' in a conventional sense, it's a physical limit, though one that may be impossible to compromise if constructed perfectly.
04:32:35maaku:in my training as a physicist I became suspicious of any appeal to math
04:32:58maaku:mathematically convenient models tend not to represent reality
05:06:32wolfe.freenode.net:topic is: This channel is not about short-term Bitcoin development | http://bitcoin.ninja/ | This channel is logged. | For logs and more information, visit http://bitcoin.ninja
05:12:42maaku:(_of course_ you use math to solve problems, but math can also not reflect reality)
05:19:46moa:maaku: correct, except within a consistent set of approximable axioms using reality-based theories
05:21:19maaku:moa: getting OT, but no that's my point. the basic axioms of a reality-based model tend not to be driven by factors which make the math easier
05:22:10maaku:there are almost no examples of math-based exploration of theoretical physics which results in new fundamental theories
05:23:06maaku:what actually happens is that some experimental evidence results in more complicated math at the fundamental level
05:24:25maaku:the theorists then work to find a better, more succinct description of reality that matches the experiment, but it rarely reaches the simplicity of the old regime
05:24:49maaku:reality tends to complicate things
05:40:35rusty:maaku: so, I've hacked up a simulator for compact SPV paths across blocks, using various different tree topologies.
05:41:58rusty:maaku: running it 20 times now, with different seeds, as there's a great deal of variation in path lengths.
05:44:42maaku:rusty: sipa did a good deal of work on this
05:45:11maaku:he found one construction that was particularly interesting
05:46:17maaku:but I also don't think there's been adequate work on what the metric should be (path to genesis is what sipa used, but I don't think that reflects mean path from A to B for arbitrary distances)
05:47:42rusty:maaku: That's a good question... fairly easy to test though.
05:48:52maaku:well unless there's an analytical solution, evaluating each scheme is N^2 which gets a little expensive
05:49:36maaku:so no scheme I've seen provides better than a small constant factor improvement over committing to all blocks
05:50:08maaku:and there are benefits to committing to every block, so that is very likely the path that will get implemented
05:50:49maaku:but we're talking about a soft-fork consensus rule, so someone needs to do the due diligence and make sure that there really isn't a vastly more efficient scheme we could be using instead
05:51:00rusty:maaku: agreed.
05:51:24rusty:maaku: I'm assuming all blocks. I tried to implement your " merklized heap filled by pushing nodes down the right-hand side" suggestion, hope I got it right.
05:53:10maaku:rusty: ah i misread you then. I thought you were suggesting to commit a single back-link which is a function of the block height, which is what sipa looked at
05:53:23rusty:maaku: and compared it with a breadth-first ordering, and a naive array-to-tree (no internal values) ordering.
05:53:40maaku:ok cool this is interesting new work
05:54:53maaku:rusty: what paths are you using as the evaluation metric?
05:56:10rusty:maaku: OK, so I generate N (currently running w/ 1M) random u64s, with rule that you can skip back up to -1ULL / value nodes.
05:56:36rusty:maaku: then I calculate optimal CSPV path back to genesis from "block" N-1.
05:57:24rusty:maaku: for each node on that path, I build the tree of prevs, and figure the depth of the prev I want.
05:58:16maaku:so you're using path to genesis?
05:58:23rusty:maaku: yeah.
05:58:50rusty:maaku: easy to calc path to something else and see what that does to results.
06:00:10rusty:maaku: now, you previously said proof for tree with internal values is shorter. I think it's actually twice as long, right? (Well, 1 for top node, 3 for second row, etc).
06:00:59rusty:(Actually, I think you said depth was 1 less, which is true...)
06:01:37rusty:Anyway, results of calculating proof lengths for N=1024*1024 over 20 runs:
06:01:47rusty:naive: proof hashes 367-2863(1416.4+/-7.8e+02)
06:01:47rusty:ideal: proof hashes 315-2143(1031.7+/-5.6e+02)
06:01:47rusty:maaku: proof hashes 499-4007(1860+/-1.1e+03)
06:02:23rusty:I think this is because short SPV jumps (which are great with your tree) are actually really rare, since we select for long ones.
06:08:24maaku:ideal is what, breadth first ordering?
06:08:51rusty:maaku: yeah. label might be ambitious, but it's certainly *expensive* if not ideal :)
06:22:21rusty:maaku: https://github.com/rustyrussell/rusty-junkcode test-trees.c . It's not polished, but it Seems To Work(TM).
06:35:43maaku:rusty: actually the tier nolan right-path-compression trick might make naive array the best...
06:36:05maaku:rusty: did you account for the approx double depth in the breadth-first case?
06:36:43rusty:maaku: yep, same formula as the 'maaku' case. What's " tier nolan right-path-compression"?
06:37:22maaku:rusty: in the way bitcoin constructs merkle trees, you don't need to store 'empty' right branches on the right-hand path
06:37:48maaku:since if there's an odd number of hashes, bitcoin duplicates the last hash
06:38:03maaku:so if the last hash is the path you are going down, you don't need to store that hash twice.
06:38:19rusty:maaku: in effect, naive does that already.
06:38:48rusty:maaku: it builds the tree in [power-of-2] [remainder] parts.
06:38:54maaku:only if you're accounting for it in your path lengths
06:39:24maaku:in the extreme case of the (2^N + 1)-th element, you need to store only one untaken branch to reference the last element
06:39:49rusty:maaku: yeah, I actually build the tree to measure the path length. It's naive, but probably less buggy that way.
06:39:59maaku:the serialization would be the left branch for the root, then the 2^N + 1 -th item, no matter the depth of the tree
06:40:01rusty:maaku: exactly. there's even a pretty diagram of that in the source.
06:40:35maaku:ok i'll look at the source but what you're saying here isn't reflecting the optimization i'm talking about
06:42:25rusty:* ^
06:42:26rusty:* / \
06:42:26rusty:* /\ \
06:42:26rusty:* / \ \
06:42:26rusty:* / \ \
06:42:26rusty:* /\ /\ \
06:42:26rusty:* 0 1 2 3 4
06:42:45maaku:rusty: (looking at code) the ideal case would have proof length of 2 * log2, no?
06:43:38maaku:because since it is storing internal values it would look like the maaku case
06:43:54maaku:just with the block headers distributed throughout the tree differently.
06:44:59rusty:maaku: not quite, since depth 0 => 1 hash.
06:45:30rusty:maaku: (from - to) >= 1...
06:45:48rusty:maaku: but yes, the cases should be identical.
06:45:59maaku:ah right i missed the required 1 hash
06:47:35rusty:maaku: I'm really surprised how good naive is. I was going to write a hybrid "64k batches of breadth-first" tree, but I'm not sure it's worth it.
06:47:58maaku:rusty: naive would never be adopted in bitcoin core, sorry
06:48:30maaku:it's good for reference only. requiring recalculation of the entire tree for every block creation/validation is just too much work for this feature
06:49:44rusty:maaku: yeah, I was wondering about that. It seems like it should be incrementable (is that a word?), but maybe it would need N/2 blocks reshuffled every N == power-of-2?
06:50:22rusty:maaku: commented out I put a swapcount in the maakutree; you're right, it's *very* nice from that POV.
06:50:34gmaxwell:Well maybe it would be workable, given you can do a million sha256/sec on a general cpu... but there would have to be a darn good reason for it, ... normative datastructures are like that, there is a lot of pressure to make sure you've got it right, since your decision binds even people with much slower hardware to implement exactly the same thing.
06:51:39maaku:I'm not sure how it would possibly be an incremental structure? Every node in the tree changes position for each block
06:52:15rusty:maaku: no, consider the diagram above. Adding node 5 just means moving 4 down.
06:52:49rusty:maaku: I didn't think too hard about it though...
06:53:38maaku:rusty: ok sorry i was thinking ideal. naive is easy to make incremental
06:54:00rusty:maaku: oh yeah, ideal *sucks*. But batching it might be a compromise.
06:54:16maaku:i'm going to start calling ideal == optimal, and naive == array
06:54:36rusty:maaku: OK, sure. Will rename and push now.
06:54:45maaku:that's the terminology we had been using before
06:54:51maaku:sorry my reading comprehension fail :\
07:35:13maaku:rusty: so I'm surprised the naive array does so well ... even better than 'optimal' over 1M blocks
07:35:38maaku:oh he's gone
07:39:44maaku:gmaxwell: so I thought rusty might have made a mistake but so far the code looks alright
07:42:11gmaxwell:maaku: about? that structure he's describing sounds like the plain insertion-ordered merkle tree that PT called MMR, just needs log extra storage to do efficient appends. (Sorry, busy with four other things and haven't been following conversation)
07:46:38maaku:right, MMR is more efficient than any of the inner-node structures we were considering
07:47:22maaku:at least for a binary tree where having an inner node means expanding the tree to twice the depth
07:58:29maaku:if having an inner node with its own value plus two branches means that skipping that node requires 2 hashes (1 hash of value, 1 hash for untaken branch), then it's a net loss to use an inner node structure
07:59:20maaku:because you end up using more hashes to elide the values along the way than you save from having a shorter tree
08:00:10maaku:so the MMR structure (which rusty is calling a naive merkle array) is better than any considered alternatives after about 1M headers
08:00:29maaku:also easier to implement, so yay
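The MMR structure being discussed can be sketched in a few lines. This is a minimal illustrative Python sketch (not petertodd's opentimestamps implementation): appending maintains one "peak" per set bit of the leaf count, merging equal-height peaks like binary carry propagation, and the root "bags" the peaks into one commitment.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

class MMR:
    """Merkle Mountain Range sketch: insertion-ordered, cheap appends."""
    def __init__(self):
        self.peaks = []   # list of (height, hash), heights strictly decreasing
        self.size = 0     # number of leaves appended so far

    def append(self, leaf_hash: bytes):
        node = (0, leaf_hash)
        # Merge equal-height peaks, like a binary carry:
        # O(1) amortised hashes per append.
        while self.peaks and self.peaks[-1][0] == node[0]:
            height, peak_hash = self.peaks.pop()
            node = (height + 1, h(peak_hash + node[1]))
        self.peaks.append(node)
        self.size += 1

    def root(self) -> bytes:
        # "Bag the peaks": fold them right-to-left into one commitment.
        acc = self.peaks[-1][1]
        for _, peak_hash in reversed(self.peaks[:-1]):
            acc = h(peak_hash + acc)
        return acc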
08:02:05gmaxwell:yep. sounds reasonable to me.
08:12:49maaku:petertodd: are there any existing implementations of merkle mountain ranges?
08:13:54petertodd:maaku: yup: https://github.com/opentimestamps/opentimestamps-client/blob/28333aacb9bf4551210d7ea6b94879b703b1ae51/opentimestamps/journal.py#L147
08:14:07maaku:petertodd: C++?
08:14:16maaku:ok thanks
08:14:25maaku:(Python will be useful too; I'll need both)
08:14:44gmaxwell:maaku: when you were commenting before about the cpu cost, I thought you were thinking of the same data structure but backwards. (which might be more efficient because long jumps in the unbalanced portions end up being the shorter hashes)
08:14:53petertodd:there's a on-disk version somewhere in there too...
08:16:39petertodd:the on-disk version is quite nice, as there's a very clear way to store all of the leaf hashes and inner hashes using the standard breadth-first tree -> array mapping (forget what that's called exactly); works fine with sparse files too if you want to leave some of the data out
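The mapping petertodd describes is commonly known as the implicit binary-heap layout: node positions map to fixed array offsets, so a complete tree of hashes fits in one flat file, and pruned regions can be holes in a sparse file. A small illustrative sketch (names hypothetical):

```python
def parent(i: int) -> int:
    """Array index of node i's parent in the breadth-first layout."""
    return (i - 1) // 2

def children(i: int) -> tuple:
    """Array indices of node i's left and right children."""
    return 2 * i + 1, 2 * i + 2

def path_to_root(i: int) -> list:
    """Array indices visited walking from node i up to the root (index 0),
    i.e. the offsets you'd seek to when reading a merkle path from disk."""
    path = [i]
    while i > 0:
        i = parent(i)
        path.append(i)
    return path
```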
08:18:02petertodd:though there's non-trivial complexity in proving a number of statements about MMR's... so I wouldn't jump too quickly to using them without thinking through that stuff. (non-issue in timestamping)
08:22:41maaku:gmaxwell: not sure I parse what you mean by "but backwards"
08:23:13gmaxwell:putting the genesis block at the most unbalanced position, e.g. the version that cannot be efficiently appended to.
08:23:28maaku:petertodd: hate to sound naive, but it seems pretty straightforward, and most importantly its properties are holding up under experiment.
08:23:28maaku:which properties are you talking about?
08:23:57petertodd:maaku: I mean things like proving what index # a proof corresponds to
08:25:01gmaxwell:maaku: just things like making sure you don't have any cases of inner-node / leaf emulation and things like that.
08:25:55petertodd:RFC6962's merkle tree is a fair bit more clear in that regard; I'm working on implementing a prunable/summable version of it
08:26:37gmaxwell:petertodd: you might want to be mindful of the compression function input sizes, note that SHA224 has a nice property that two of its outputs fit in a single compression function input.
08:26:42petertodd:remember that MMR's are basically RFC6962 but with a small decrease in proof length at the cost of another layer of complexity
08:27:59petertodd:gmaxwell: yeah, I write code in Python :)
08:28:35petertodd:gmaxwell: I've done crazy low-level optimizations before, but am deeply skeptical of their place in security code
08:29:16gmaxwell:Well there is no free lunch. That's a pointless halving in performance, ... if the performance isn't needed, better to use a more costly hash function. :)
08:30:33petertodd:a pointless halving of what may easily be 1% of the performance of your system...
08:30:37gmaxwell:(and if one doesn't like SHA224, SHA512/256 is well specified and also has room for metadata in one compression function run, and SHA512 appears to have improved security margin in any case)
08:31:25maaku:petertodd: the number of hashes involved here is significant
08:31:30gmaxwell:petertodd: perhaps, actually checking hashtrees in bitcoin already shows up quite visibly in the profiles. If there were several other proof hashtrees we were carrying around sha256 would be the number one thing in verifying blocks with already cached signatures.
08:32:07petertodd:which gets to my bigger point: if your architecture for bitcoin is sufficiently poorly scaling that you care about this stuff, you're going down a bad path
08:32:49maaku:actually after re-reading the MMR doc, I don't think that's what rusty implemented at all
08:33:05gmaxwell:petertodd: I don't agree, all these decisions scoot around the amount of decentralization you can get for a given scaling level. They also have significance when you want to talk about extracting proofs for embedded devices (and for using inside other ZKP).
08:33:13petertodd:maaku: I'll take a wild guess he made RFC6962 instead...
08:33:27maaku:looking at it now to compare
08:33:44maaku:petertodd: the validation is incremental and cheap, but that doesn't mean we shouldn't care about performance to regenerate/revalidate the current state
08:33:53petertodd:gmaxwell: if ZKP matters to you, specify that as an explicit requirement - ZKP can easily result in needs that go *against* normal scaling requirements
08:34:09maaku:or not be concerned about low power, embedded devices
08:34:11petertodd:gmaxwell: +-50% differences don't interest me in the slightest
08:35:27gmaxwell:Good for you.
08:35:50petertodd:the #1 goal of this stuff needs to be to make it understandable and obvious. for instance replacing the easy to read SignatureHash() function with that horrendous class was a huge step backwards for people's understanding of how signature hashes actually work - I routinely have to tell people to look up the old copy when they ask me how sig hashes work
08:36:39maaku:not everyone would agree. i'm going to leave it at that.
08:37:38petertodd:gmaxwell: don't forget I've actually spent the majority of my career *way* in the other direction, creating stupidly resource constrained hardware - I've written thousands of lines of uC assembler, VHDL, etc. but the costs of that kind of development are horrendous
08:39:17maaku:yes, rusty implemented rfc6962, in terms of tree structure at least
08:40:31maaku:an efficient serialization of standard bitcoin merkle trees with the right-hashes-elided trick might work too
08:41:43maaku:(basically the same thing in terms of proof size)
08:46:56petertodd:right-hashes-elided trick?
08:47:43maaku:petertodd: what tiernolan has been posting about recently w.r.t merged mining
08:48:13petertodd:maaku: where's this?
08:48:31maaku:the observation that if you have a merkle tree of 2^n + 1 items, the right-most path could be fully elided by specifying the root left branch and the right most item
08:48:48maaku:bitcoin-development, maybe 2 weeks ago?
08:48:54maaku:he has a 2013 post on bitcointalk with an earlier version of it
08:48:57petertodd:oh right, that one where they're collapsed
08:49:29petertodd:pretty sure there's at least one earlier publication of that idea FWIW
08:50:03maaku:sure, probably
08:53:29maaku:petertodd: so what's the reasoning for 'bagging the peaks' in MMR vs an rfc6962 like construction?
08:54:20petertodd:maaku: makes inclusion proofs from most recent leaves to tip as cheap as possible (amortised) - but that's only relevant in timestamping
08:55:12maaku:rfc6962 does that, no?
08:56:25petertodd:not quite as cheaply, but it's so close I'm not sure the difference is worth the effort
08:56:46petertodd:proving index # in rfc6962 is dead obvious for instance
08:59:45petertodd:btw another valuable thing to be able to do is prove T_{i+1} is a superset of T_i, or more generally, T_{i+j} is a superset of T_i
09:01:43petertodd:actually, superset is the wrong term for this... succession? i.e., with tree T_i that commits to array A, I want to show that if I append item a_{i+1} I get tree T_{i+1}
09:03:35maaku:is there a succinct way of determining the depth of index i of an MMR with N nodes?
09:04:11petertodd:off the top of my head I'm going to say probably not - wasn't a design requirement for what it was designed for
10:45:26rusty:maaku: so, I implemented "spv path back to N != 0", and tidied things up a bit.
10:47:13rusty:maaku: the supremacy of "optimal" is even clearer if you ask it to go back from block 5M to 5M-150.
10:47:17rusty:array: proof hashes 142-732(403.4+/-2.6e+02)
10:47:17rusty:optimal: proof hashes 33-101(63.4+/-29)
11:11:59rusty:maaku: sleep time for me. Latest variant pushed, w/ two new batching trees.
11:12:47rusty:maaku: Back tomorrow, but I want to try optimizing the proof length directly rather than the path length. Maybe cache the "best" 128 blocks on the LHS of the tree, see if that wins...
20:09:34kanzure:adam3us: typos? anti-relay -> anti-replay
21:06:57amiller:adam3us1, everyone meant "anti-replay", not "anti-relay", but you seem to be responding as though you read it as "anti-relay"
21:07:40amiller:or maybe you didn't and it's a benign typo in either case
21:08:44sipa:a proof-of-non-relay seems quite impossible :)
21:18:13gmaxwell:sipa: it's called quantum encryption? :P
21:18:29sipa:Hey. No novel assumptions.
23:43:42rusty:maaku: OK, so I've implemented optimization-by-prooflen (rather than optimization by pathlen, then convert to proof). It does make a difference.
23:44:56rusty:eg: (1M blocks, 100 runs):
23:45:00rusty:optimal: proof hashes 315-2143(1031.7+/-5.6e+02) <- old result
23:45:11rusty:prooflen-optimal: proof hashes 265-1719(818.2+/-4.4e+02) <- new result
23:45:34rusty:20% smaller CSPV proofs, basically
23:45:41sipa:how is the size of blocks relevant?
23:46:00rusty:sipa: the size of the merkle proof, not the block...
23:46:18sipa:ah, 1 million blocks, not 1 MB blocks; sure
23:46:30sipa:rusty: i've previously implemented that too
23:46:40sipa:i don't think i have that code anymore
23:46:54rusty:sipa: cool! I'm playing with different topologies for the prevblock merkle.
23:47:07sipa:trying to remember what structure i used
23:47:25rusty:sipa: see https://github.com/rustyrussell/rusty-junkcode (particularly test-trees.c)
23:48:13rusty:sipa: topology makes a big difference. But it was only last night I realized that optimizing SPV # blocks then converting to proof was kinda dumb.
23:48:52rusty:sipa: so implemented that now instead of doing actual work :)
23:49:23sipa:now, one interesting thing is that even a 'skiplist' with only a single reference back (in addition to the direct parent), with exponentially distributed distance back is enough to get within (iirc) a constant factor of optimal
23:50:31rusty:sipa: Hmm, interesting!
23:50:50rusty:sipa: how did you select distance back for block N?
23:51:54rusty:(I'm trying to parse "exponentially distributed distance back" with "single reference back"... and failing?)
23:52:27sipa:so every block has two references back, its direct parent, and another
23:52:45rusty:Yep, how do you select "another"?
23:52:51sipa:that other is 50% of the time 2 back, 25% of the time 4 back, 12.5% of the time 8 back, ...
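The distribution sipa describes is geometric over powers of two, and can be sampled with repeated coin flips. An illustrative sketch (not sipa's code), assuming the extra link is chosen randomly per block:

```python
import random

def extra_backref_distance(rng: random.Random) -> int:
    """Distance back for a block's second reference: 2 with probability
    1/2, 4 with probability 1/4, 8 with probability 1/8, and so on."""
    d = 2
    while rng.random() < 0.5:
        d *= 2
    return d
```

Over many draws, about half the blocks link 2 back, a quarter link 4 back, etc., which is what gives a short (logarithmic-ish, up to a constant factor) expected path to any ancestor.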
23:53:07rusty:sipa: ah, nice. Let me simulate that...
23:53:19sipa:iirc it's something like 4x worse
23:53:31rusty:* rusty continues to ignore that module bug email...
23:53:38sipa:https://github.com/bitcoin/bitcoin/blob/master/src/chain.cpp#L64-L73 <- this is a slightly more complex formula, used in bitcoin itself now
23:53:54sipa:after trying a dozen functions or so, this one was the best
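For reference, the skip-height formula in the linked chain.cpp can be transcribed into Python roughly as below. This is from memory and hedged: the linked Bitcoin Core source is authoritative, and the helper names here mirror the C++ but may not match exactly.

```python
def invert_lowest_one(n: int) -> int:
    """Clear the lowest set bit of n."""
    return n & (n - 1)

def get_skip_height(height: int) -> int:
    """Height that the extra back-pointer at `height` points to.
    Odd heights jump slightly less far, to spread the pointers around."""
    if height < 2:
        return 0
    if height & 1:
        return invert_lowest_one(invert_lowest_one(height - 1)) + 1
    return invert_lowest_one(height)
```

Powers of two skip straight to height 0, while other heights clear their low bits, giving exponentially spaced jumps without any randomness or extra storage.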
23:55:11rusty:sipa: weird, I didn't know bitcoin had this already!
23:55:26sipa:this is just an implementation detail, to speed up iterating chains
23:55:32gmaxwell:rusty: totally different application, thats an in memory data structure used for node-internal blockchain management.
23:56:01rusty:sipa/gmaxwell: thanks.
23:56:02sipa:in cases where only the size of a proof and not the entire storage matters, this is definitely suboptimal
23:56:16sipa:but it's interesting that it's only a constant factor (but a significant one) worse
23:56:36sipa:(iirc, again, this is just how i remember the numbers i saw - no formal proof or anything)
23:57:47rusty:sipa: well, I'll code it up anyway, since I'm here. Plus, procrastination...
23:57:47gmaxwell:the difference in the committed structure is that you don't have free choice of where you go back unless you commit to all. So e.g. just a constant single backref would usually just not be accessible.
23:59:21gmaxwell:When we were experimenting before I just used octave backrefs with the optimization that if an already included shorter backref was a better path to genesis, in the DP solution, I omitted it. But maaku pointed out that results in long paths to get to anything except genesis.