--- Log opened Mon Jul 08 00:00:19 2013
13:44 < jgarzik> petertodd, definitely leaning towards Type 1 (sacrifice announce/commit) and Type 2 (optional single-tx timestamping) SINs. The latter are essentially disposable SINs.
13:45 < jgarzik> Type 1 sacrifice buys your way onto the identity alt-chain
13:46 < petertodd> Well if you want to sacrifice to mining fees there just aren't any other options right now.
13:47 < petertodd> (unless you want to involve miners and do coinbase txout, inconvenient)
13:49 < jgarzik> petertodd, nod
13:49 < jgarzik> petertodd, Just noting there will be a sacrifice-free SIN, in addition to the current kind
13:50 < jgarzik> petertodd, call them permanent and disposable SINs. Your disposable SIN might be used on one website only, optionally linking back to the permanent SIN if you desire to digitally sign that fact.
13:51 < petertodd> Does a sacrifice-free SIN really need to be timestamped as a transaction directly?
13:54 < jgarzik> petertodd, Need? No. Hence "optional". There might be some value in proving a SIN did not exist before X date.
13:54 < jgarzik> petertodd, a disposable SIN could be created entirely privately, a la a bitcoin address
13:54 < jgarzik> with no network activity
13:55 < petertodd> Thinking the SIN could be timestamped by a merkle-path to a block header.
13:55 < petertodd> I'd suggest separating the idea of the sacrifice and the timestamp conceptually, and making both simply be "by whatever means"
13:56 < jgarzik> petertodd, whatever provable means
13:57 < petertodd> Sure, point is, in the software have a master key, have a proof of sacrifice for that bit of data, and have a proof of timestamp. Often the two will actually use the same data, though not always.
13:57 < jgarzik> petertodd, Partially agreed, though: the point of the specification was to take the theory of decentralized identity and turn it into something people could reasonably implement and interoperate with.
Practical levels of interoperation, out of the box, mean some details will be defined quite specifically by default (like the method of timestamping, or which chain shall be used for timestamping)
13:57 < petertodd> ...and nothing wrong with more than one sacrifice attached
13:57 < jgarzik> agreed
13:58 < petertodd> Sure, I just worry you're creating a bunch of very specific special-purpose code, where more general is better.
13:58 < jgarzik> I want to get the identity alt-chain demo-able (if not usable) out of the gate, too
13:58 < jgarzik> petertodd, understood. though make it too general and nobody interoperates usefully ;p
13:59 < jgarzik> It is easy enough to change details like the sacrifice minimum cost, or the sacrifice or timestamping chain used for validation
13:59 < petertodd> Well, remember I've got the experience of making a general timestamper, and in the end it turned out to be really not a big deal basically.
13:59 < petertodd> About as easy as a Bitcoin-tx-specific one
14:02 < jgarzik> The optional timestamping is not really an important component of disposable SINs. Just thought it might be useful.
14:02 < petertodd> Well, leave it out then for v0.1 :)
14:02 < jgarzik> Could leave it out entirely, and let users solve that problem in whatever way they wish.
14:02 < jgarzik> :)
14:02 < petertodd> Making SINs scarce is the innovative thing anyway
14:03 < jgarzik> yep. and with disposable SINs you are given a choice between the two.
14:03 < petertodd> what language are you looking at implementing it in first anyway?
14:04 < jgarzik> petertodd, sadly for python fans, probably javascript. I'm ultimately a C programmer, FWIW, so my left-to-own-devices choice would be that: create a "libsin" in C. Might do that eventually anyway.
14:05 < jgarzik> JavaScript looks C-esque (personal taste), seems faster than python, and is browser friendly.
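petertodd's 13:55 suggestion of timestamping a SIN via a merkle path to a block header can be sketched as follows. This is a minimal illustration, not part of any spec discussed here; the function and variable names are hypothetical, but the hashing follows Bitcoin's convention of double SHA256 with left/right ordering decided by the leaf's index.

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Bitcoin-style double SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_path(leaf_hash: bytes, path: list, index: int, merkle_root: bytes) -> bool:
    """Walk a merkle branch from a leaf (e.g. the hash of a SIN) up to a
    block header's merkle root. `index` is the leaf's position in the
    tree, so each bit tells us whether the sibling hash goes on the left
    or the right at that level."""
    h = leaf_hash
    for sibling in path:
        if index & 1:          # current node is a right child
            h = sha256d(sibling + h)
        else:                  # current node is a left child
            h = sha256d(h + sibling)
        index >>= 1
    return h == merkle_root

# Toy two-leaf tree: a timestamp proof for `sin_leaf` is just [other_leaf].
sin_leaf = sha256d(b"example SIN")
other_leaf = sha256d(b"some other tx")
root = sha256d(sin_leaf + other_leaf)
assert verify_merkle_path(sin_leaf, [other_leaf], 0, root)
```

Anyone holding the path and a block header containing `root` can check the SIN existed no later than that block, with no network activity beyond fetching headers.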
14:05 < petertodd> javascript is good for web stuff, not a bad choice
14:06 < petertodd> I always knew opentimestamps would need javascript client libraries to be really useful
14:07 < jgarzik> indeed
14:09 < petertodd> Incidentally, proof-of-sacrifice can be used to make an inherently 51%-proof alt-chain.
14:09 < jgarzik> petertodd, on a mostly unrelated note, txtool will be gaining an easy ability for people to create timestamping OP_RETURN transactions
14:09 < petertodd> Ah cool
14:10 < petertodd> You design your chain such that to create coins on it you need to sacrifice Bitcoins, and at the same time that sacrifice is how consensus is determined.
14:10 < jgarzik> petertodd, I still need to review IRC chat notes, and think through how the identity alt-chain might work. For convenience's sake, it might be useful to have a chain that is not PoW at all, but is provable through timestamping + sacrifices in another chain.
14:11 < petertodd> You also allow users to sacrifice the alt-coins to mine blocks as well. Now this *isn't* proof-of-stake, because given a jam-proof network you are in fact giving up something of value.
14:11 < jgarzik> petertodd, i.e. a bitcoin sacrifice could grant the right to update the identity chain
14:11 < jgarzik> thus paying in bitcoin to update the identity database
14:12 < petertodd> The trick is that any attacker trying to 51% the blockchain for profit has the problem that they have to spend as much as the history is worth to people - a double-spend doesn't work because the person you are double-spending will sacrifice up to 100% of what you are gaining, and on top of that you'll affect third parties with similar incentives.
14:13 < petertodd> Such a "blockchain" can easily be done as one tx per block, and can be done as a DAG. In the case of zerocoin, the accumulator is inherently serial though, so a dag doesn't make sense.
14:13 < petertodd> However...
this does mean the zerocoin blocks can be created at the same rate as zerocoin txs can be verified, completely bypassing the crazy-slow verification problem, especially when you further couple it with fraud proofs and nodes only verifying part of the chain.
14:14 < petertodd> jgarzik: Makes sense, sounds like my zookeyv protocol.
14:14 < jgarzik> petertodd, zookeyv?
14:15 < petertodd> Add in some decent primitives for trading zerocoins for bitcoins and you have a solid way to bolt zerocoin onto bitcoin without performance issues.
14:15 < petertodd> jgarzik: what I'm calling the key-value store I originally mentioned to you; named after zooko's triangle
14:15 < jgarzik> petertodd, gotcha
14:16 < jgarzik> petertodd, indeed, the identity database is ultimately a key/value database
14:17 < petertodd> jgarzik: you planning on letting people grab human-readable names?
14:18 < jgarzik> petertodd, well the top level is a flat SIN namespace. Under that, key/value pairs attached to each SIN. In theory, each SIN could assert name.real="Garzik, Jeff"
14:20 < petertodd> jgarzik: hmm... tricky. Remember that with any consensus system what maps to what is up to the biggest spender.
14:22 < jgarzik> petertodd, Updates to each SIN are validated by MPK digital signatures. At least that bit is easy to prove.
14:23 < jgarzik> petertodd, If the alt-chain is wholly dependent on timestamped transactions in the bitcoin blockchain, the consensus problem becomes making sure everybody sees the same view of the data when parsing the blockchain.
14:23 < petertodd> jgarzik: Right, but someone can even rewrite the chain so the updates didn't happen.
14:23 < jgarzik> a lot simply depends on the alt-chain design itself
14:24 < jgarzik> nod
14:24 < petertodd> Only if the data itself - or at minimum the hashes of the pairs - is in the blockchain.
14:24 < petertodd> er, I mean H(key) H(value)
14:24 < petertodd> heck, OP_RETURN H(key) H(value) :)
14:25 < jgarzik> In this scenario, I imagined each alt-chain update would require a bitcoin sacrifice transaction that includes a hash of the record update
14:25 < jgarzik> obviously there are other validations that must occur, before it can make it into the alt-chain
14:26 < jgarzik> but that would be the anchor
14:26 < jgarzik> H(alt chain transaction)
14:26 < jgarzik> which would include SIN, key and value
14:26 < petertodd> H(alt chain tx) is no good because that leaves open the possibility of a withholding attack
14:27 < jgarzik> petertodd, that's true of any hash, though
14:27 < jgarzik> petertodd, otherwise you're back to OP_RETURN
14:28 < petertodd> No, because if I want to associate name:"Peter Todd"==0x12345 I can determine if there exist any H(name:"Peter Todd") in the blockchain and outspend their sacrifices
14:28 < petertodd> Without that I can never know if someone has a sacrifice waiting to be published
14:30 < jgarzik> I guess you could consider it all one big key/value namespace, if you prefix every key with the SIN being updated
14:31 < jgarzik> key="1234-5678-9abc name", value="Garzik, Jeff"
14:31 < jgarzik> key="1234-5678-9abc age", value="38"
14:31 < petertodd> Right, but if it's just SINs as keys, what do you really need consensus for?
14:32 < petertodd> Just make it a big gossip network with anti-DoS
14:32 < jgarzik> petertodd, The problem being solved by the alt-chain is admittedly not consensus, simply decentralized storage and maintenance of the identity database.
14:32 < jgarzik> I don't think a DHT will offer good disconnected operation
14:33 < jgarzik> thus looking at a replicated db like an alt-chain
14:34 < petertodd> Yeah, although having said that, consensus can still be useful: consensus about the overall contents of the global database.
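petertodd's throwaway "OP_RETURN H(key) H(value)" can be made concrete as a tiny script-building sketch. The helper below is hypothetical, not anything txtool implements; it only relies on two facts about Bitcoin Script: OP_RETURN is opcode 0x6a, and pushes of 1-75 bytes are encoded as a single length byte. Whether a 64-byte data carrier is standard for relay depends on node policy, which was still in flux at the time.

```python
import hashlib

OP_RETURN = 0x6a

def h(data: bytes) -> bytes:
    """Commit to data by hash so the key/value pair itself stays off-chain."""
    return hashlib.sha256(data).digest()

def op_return_commitment(key: bytes, value: bytes) -> bytes:
    """Build a provably-unspendable scriptPubKey committing to H(key) and
    H(value): OP_RETURN <push of 64 bytes>."""
    payload = h(key) + h(value)               # 32 + 32 = 64 bytes
    assert 1 <= len(payload) <= 75            # fits in a direct push
    return bytes([OP_RETURN, len(payload)]) + payload

script = op_return_commitment(b"1234-5678-9abc name", b"Garzik, Jeff")
# 1 opcode byte + 1 length byte + 64 payload bytes = 66 bytes total
```

Publishing H(key) rather than H(key || value) is what defeats the withholding attack petertodd describes: anyone can scan the chain for an existing commitment to the same key and outspend its sacrifice.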
14:34 < jgarzik> bitcoin-the-database-technology :) Google for "D1HT", an acronym I just learned last year
14:34 < petertodd> Leave the contents themselves to SomeOtherDatabase(TM)
14:34 * jgarzik couldn't believe they invented a new term for "copy the whole damn database to everyone"
14:35 < petertodd> ha, yeah d1ht's are funny
14:35 < petertodd> Note though that for consensus on overall contents, all nodes actually need to store is the list of 64-bit truncated hashes of every db item.
14:36 < petertodd> (2nd-preimage resistance is sufficient, maybe do 80-bit or 128-bit if you want to be really safe)
14:38 < jgarzik> petertodd, the solution does need a consistent overall view of the global identity database
14:40 < petertodd> perfect, make sacrifices commit to that overall view then
14:40 < petertodd> (or commit to being part of a dag)
14:40 < jgarzik> petertodd, hmmmmm, indeed
14:41 < jgarzik> petertodd, need to figure out how to resolve a race, then
14:41 < jgarzik> petertodd, i.e. two conflicting identity db updates arrive in parallel, and make it into the same block
14:42 < petertodd> highest sacrifice... which is zookeyv, but if it's really just SIN=value that will only happen accidentally
14:44 < petertodd> Something else to keep in mind is that sacrifice/byte is a good way to do anti-spam - tier the database and give nodes the option of dropping the lowest tiers.
14:45 < petertodd> Or simply order every record in the database and drop everything above n GB
14:46 < jgarzik> The identity database just needs to serve the latest version of a SIN's key/value pairs, so updates are insta-prunable (modulo the obvious buried-in-chain safety factor)
14:47 < jgarzik> i.e.
answer queries such as $value = lookup($sin, "name")
14:47 < petertodd> Sure, but total bytes are still important
14:48 < jgarzik> agreed, though not sure how you would tier this database
14:48 < jgarzik> any active record could potentially be queried
14:48 < jgarzik> the idea was to create an anti-spam barrier up front, in sacrifice-to-update-db
14:48 < petertodd> Nodes just have to contribute what they can
14:49 < jgarzik> but then drop nothing (I hope?)
14:49 < petertodd> Point is, what is your overall resource consumption model going to be? There *have* to be limits overall
14:49 < jgarzik> a fair point and open question. maybe identities should retire, and require republishing (at a cost)
14:50 < jgarzik> to maintain the database, and expire old stuff
14:50 < petertodd> Something... but figure it out in advance
14:50 * jgarzik kicks xchat
14:51 < petertodd> heh
16:09 < amiller> i want to talk about p2ptradex
16:09 < amiller> you guys read this post? https://bitslog.wordpress.com/2013/05/20/p2ptradex-back-from-the-future/
16:25 < gmaxwell> amiller: what about it? ... results in enormous transactions to have any real degree of cross-chain proof, and even then only gets you spv security.
16:25 < amiller> i don't think any of that is necessarily true
16:25 < amiller> first of all it doesn't have to be about transaction size, proof size can be amortized over many transactions
16:26 < gmaxwell> The first is true so long as headers are a singly linked list.
16:26 < amiller> under normal conditions, two blockchains are perhaps roughly synchronized
16:26 < amiller> you could merkle tree over the headers and go down to log
16:26 < gmaxwell> The second is true so long as you don't comingle the consensus of the two chains.
16:26 < amiller> you don't have to do full validation
16:26 < gmaxwell> amiller: only by changing the headers.
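petertodd's earlier point that nodes only need the list of 64-bit truncated hashes of every db item to agree on overall contents admits a very small sketch. This is an illustration of the idea, not a specified protocol; the function names are hypothetical. The digest is order-independent, so two nodes holding the same records in different orders agree on the overall view a sacrifice could then commit to.

```python
import hashlib

def item_tag(item: bytes, tag_bytes: int = 8) -> bytes:
    """64-bit truncated hash of one db item; bump tag_bytes to 10 or 16
    for the 80-bit / 128-bit variants petertodd mentions."""
    return hashlib.sha256(item).digest()[:tag_bytes]

def db_digest(items) -> bytes:
    """Compact commitment to the overall database contents: hash the
    sorted list of truncated item tags, so ordering doesn't matter."""
    tags = sorted(item_tag(i) for i in items)
    return hashlib.sha256(b"".join(tags)).digest()

a = b'key="1234-5678-9abc name", value="Garzik, Jeff"'
b = b'key="1234-5678-9abc age", value="38"'
assert db_digest([a, b]) == db_digest([b, a])   # same contents, same view
```

As petertodd notes, second-preimage resistance is all that's required here, which is why truncating to 64 bits is defensible: an attacker must collide a *specific* existing item, not any pair.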
16:26 < amiller> the thing is you can be asymmetric in two ways
16:26 < amiller> like if i am trading my bitcoins for your litecoins
16:27 < amiller> i don't really care if the bitcoin side gets canceled
16:27 < gmaxwell> amiller: no, but I sure do.
16:27 < amiller> i'm only concerned that the bitcoin side goes through and litecoin gets canceled
16:27 < amiller> right
16:28 < amiller> so i am happy if the bitcoin side just trusts litecoin at face value
16:28 < gmaxwell> I mean the _whole_ point of doing anything fancy there is to control the cancelation behavior, otherwise you can just do joint secret locked outputs.
16:28 < amiller> i don't care if the bitcoin chain only does spv validation of litecoin because i'm going to be just as vulnerable to litecoin anyway
16:28 < amiller> likewise you'll be happy if litecoin does only spv validation of bitcoin
16:29 < amiller> because you're going to end up with bitcoins anyway and if spv isn't good enough then something horrible has happened
16:29 < gmaxwell> amiller: say we're going to trade 1000 BTC worth of coins and I can buy computing power at near mining-cost rates on the open market.
16:30 < gmaxwell> how big must the transactions be before it's not cheaper to mine bogus blocks instead of completing the transaction?
16:31 < amiller> right so the tricky case is when there's a big disparity in mining power between the two chains
16:31 < amiller> but lets say we agree on the price
16:31 < amiller> it's proportionally a much bigger transaction on the tiny litecoin chain
16:31 < amiller> so i should correspondingly wait much longer before i'm sure
16:32 < gmaxwell> just assume it's 'bitcoin to bitcoin' if you will. I still think the result ends up ugly.
16:32 < amiller> the proof doesn't all have to be in the transaction, i think sdlerner's particular solution is wrong and ugly but the key idea works
16:34 < amiller> like assume you can use something like the hash-value-highway to get a concise aggregate sample of work
16:34 < gmaxwell> even a cut-and-choose compression of the headers ends up being quite large.
16:34 < amiller> basically since there are tiny trivial litecoin blocks so frequently, it would suck to try to say that bitcoin has to validate two weeks worth of ltc blocks before committing the transaction
16:35 < gmaxwell> amiller: I think the bitcoin-bitcoin case sucks too, as mentioned. even when you get to dozens of headers the transaction is rather enormous.
16:35 < amiller> but if i'm going to end up with litecoin anyway, i'm okay if bitcoin only does concise work-sampling validation
16:35 < amiller> if there is a lot of volume of btc-to-litecoin trades then we can all amortize the validation
16:35 < amiller> there's no reason each individual transaction has to repeat the whole process
16:36 < amiller> there's maybe a scheduling/batching challenge in there
16:36 < gmaxwell> and any subsetting case will still need n bits of selection where n is fairly large compared to work.
16:36 < amiller> that's not true, i don't see why you'd say that?
16:36 < gmaxwell> amiller: yes, if you comingle the consensus algorithm, and effectively merge the chains - requiring all full validators to validate both - it obviously works.
16:36 < amiller> no i'm saying it doesn't require full validation
16:37 < gmaxwell> amiller: because if your sample is just one point then a single lucky block can rob all concurrent spends. and also may take forever to come, leaving the transactions stuck for a long time.
16:38 < gmaxwell> amiller: if it's not full validating then surprise, it's just spv security. And SPV is quite weak when you have an information-hiding risk.
16:38 < gmaxwell> So you need a lot of header proof to make SPV with a hiding risk not laughably bad.
16:38 < amiller> what do you mean
16:38 < amiller> i don't follow what you mean by information hiding
16:38 < amiller> if you mean errors in transactions then headers don't solve that anyway so i don't know what you mean
16:39 < gmaxwell> As I said before, consider a 1000 BTC trade "bitcoin to bitcoin" via this mechanism. Say you require 12 headers. I can buy that computation for about 300 BTC. A big profit to cheat. The inner validation only knows what you tell it, it can't go out and discover that there is a longer chain far ahead of that one.
16:40 < amiller> that's true of any btc transaction with the threat of double spending
16:40 < gmaxwell> No, it's not - because you can find out that there is a longer chain, so someone spending weeks to produce a 12-header stub does no good, as the whole world has moved along.
16:41 < gmaxwell> SPV in information isolation requires only energy. SPV when there is no isolation requires energy at high power.
16:42 < gmaxwell> I think this is a tangent in any case.
16:42 < amiller> the rules for applying include an amount of work in both chains
16:42 < amiller> so it's not just 12 headers at any time
16:42 < amiller> but 12 bitcoin headers before, say, 60 headers of litecoin
16:42 < amiller> 60+epsilon
16:43 < gmaxwell> you can't be guaranteed any particular processing speed - especially for your jumbogram transaction.
16:43 < amiller> if i'm confident i'm going to learn about 60 litecoin headers before you learn about 12 bitcoin headers, then i'm okay
16:44 < amiller> the point is we are both taking bets about the rate of proof-of-work of the chain we're going to end up on
16:44 < amiller> and any substantial change in that would make us vulnerable to double spends where we end up anyway
16:44 < gmaxwell> And this accomplishes exactly what?
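gmaxwell's numbers line up with buying hashpower at roughly the 25 BTC block subsidy of mid-2013: 12 headers at ~25 BTC each is the ~300 BTC he quotes. A back-of-the-envelope sketch of the break-even confirmation depth follows; the cost model (hashpower purchasable at close to mining cost, valued at the subsidy alone, ignoring fees) is an assumption drawn from his example, and the function is hypothetical.

```python
import math

def min_headers(trade_value_btc: float, cost_per_block_btc: float = 25.0) -> int:
    """Smallest number of fake headers whose mining cost exceeds the
    trade value, i.e. the depth at which cheating on an isolated SPV
    proof stops being profitable. cost_per_block_btc approximates the
    open-market price of one block's worth of work (~the block subsidy)."""
    return math.floor(trade_value_btc / cost_per_block_btc) + 1

# gmaxwell's example: 12 headers cost ~300 BTC, far below a 1000 BTC
# trade, so 12 confirmations leave a big profit to cheat.
assert 12 * 25.0 == 300.0
# The isolated proof would need 41 headers (41 * 25 = 1025 BTC > 1000).
assert min_headers(1000) == 41
```

This is exactly his "SPV in information isolation requires only energy" point: without the ability to see a longer competing chain, security scales linearly with proof size and trade value.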
16:45 < gmaxwell> A _trivial_ protocol already reduces this problem to pure holdup risk.
16:45 < amiller> right, so i'm solving the holdup risk for a cross-chain transaction, up to the same security guarantee we have against double-spending in an individual chain
16:46 < gmaxwell> except you're not. Because the transactions cannot be mined atomically in both.
16:47 < gmaxwell> The rates of the two chains might be a nice constant ratio, but the _start time_ has no particular reason to have a zero offset in the two chains.
17:26 < amiller> ok i almost worked it out
17:26 < amiller> difficult to explain, this may take a few tries
17:27 < amiller> i'm giving you my bitcoins and you're giving me your litecoins, but suppose i'm able to produce a short proof that the litecoin chain has moved on several blocks *without* having your end of the transaction on it
17:27 < amiller> i should be able to present that proof to the bitcoin chain and use it to cancel my sending bitcoins to you
17:31 < gmaxwell> right, okay, so you need a UTXO proof, plus headers.
17:31 < amiller> not full headers, less than spv
17:31 < amiller> just a work sample
17:31 < amiller> that can be seriously small
17:32 < gmaxwell> Be concrete. I know ways to reduce enormous amounts of work to merely large, but I'm not seeing how you actually get something compact.
17:32 < gmaxwell> and a utxo proof is log(total utxo)
17:34 < gmaxwell> (the two ways I know to reduce enormous amounts to large are the hash highway method - and with the hash highway I think you need a header format change or you can't show the headers are related - and non-interactive cut and choose)
17:34 < amiller> header format change yes
17:34 < amiller> the noninteractive cut and choose isn't necessary
17:35 < amiller> basically i don't need to assert that the header samples form a valid chain
17:36 < gmaxwell> you do need to assert they came after the utxo-proof-connected header.
17:36 < amiller> i just have to show that they are very unlikely to have been constructed without the minimum amount of work, and that they all occurred after some deadline (meaning there's some path of preimages that leads to some origin point of interest)
17:36 < gmaxwell> s/came after/are connected to/
17:39 < gmaxwell> amiller: otherwise I mine a single fake litecoin block with a fake utxo commitment and give you that and a dozen real litecoin headers.
17:40 < amiller> hm, right, so i should check that the utxo commitment associated with each block couldn't have had data in it that contradicts my claim (that the transaction i care about has not shown up)
17:41 < gmaxwell> yea... so 800 bytes per block... :(
17:43 < amiller> if that's the only thing to grimace at i'm happy
17:43 < amiller> imo this is a building block for not-necessarily-global blockchains
17:43 < gmaxwell> by per block I mean per block in your proof.
17:44 < amiller> yes i know
17:44 < amiller> if there's a lot of volume of btc-to-ltc transactions then we can all amortize the validation of work
17:44 < gmaxwell> well the utxo membership proofs can't really be substantially combined.
17:45 < amiller> yes but i only need it on the last one if there are canonical litecoin headers already
17:45 < gmaxwell> canonical litecoin headers implies full nodes validating litecoin blocks.
17:46 < amiller> either way this is just a possible optimization
17:46 < amiller> lets keep considering the worst case where i am the only one using this trade path and so i have to pay for the entire validation
17:47 < gmaxwell> that in and of itself is a residual holdup risk.
17:48 < gmaxwell> e.g. I can at least extort the value of that refund minus epsilon, assuming the non-iterated interaction.
17:48 < amiller> lets say we figure out what that price will be and set an appropriate length of time
17:48 < gmaxwell> I'm not sure how much of a real risk holdup actually is.
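gmaxwell's "800 bytes per block" is consistent with his earlier "a utxo proof is log(total utxo)": a merkle membership path contributes one 32-byte sibling hash per tree level, so a UTXO set of around 2^25 entries (an assumption about the era's set size, not a figure from the log) gives a 25-level path of exactly 800 bytes. A sketch of the arithmetic, with hypothetical names:

```python
import math

def utxo_proof_bytes(utxo_count: int, hash_bytes: int = 32) -> int:
    """Size of one merkle membership proof against a utxo commitment:
    one sibling hash per tree level, ceil(log2(total utxo)) levels."""
    depth = math.ceil(math.log2(utxo_count))
    return depth * hash_bytes

# ~2^25 unspent outputs -> 25 levels * 32 bytes = 800 bytes per sampled block
assert utxo_proof_bytes(2**25) == 800
```

Since each sampled header in the refund proof needs its own membership (really non-membership) check, the per-block cost multiplies by the sample size, which is why gmaxwell grimaces and why the proofs "can't really be substantially combined".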
17:48 < amiller> does this solve the race condition
17:48 < amiller> i still can't put my finger on how to state this
17:48 < gmaxwell> The interesting thing is that it's always been possible to do secure-except-holdup cross-chain transactions - and no one is doing it.
17:49 < gmaxwell> But you can't say that holdup is some enormous scare factor, because plenty of people do totally insecure cross-chain trades.
17:49 < gmaxwell> I have a feeling that holdup isn't actually a big problem. It's a problem - but you could just add a little bit of reputation or identity and basically eliminate it.
17:50 < petertodd> All the evidence that the holdup happened can be right in the blockchain, making the reuse problem fidelity bonds face much easier to solve.
17:50 < gmaxwell> (or at least reduce it to the point where that kind of solution is cheaper - even considering the weighted failures - than the infrastructure required and the direct costs for your proof-refund txns)
17:51 < amiller> i'm aiming bigger, if this is solvable then it's useful for local rather than global chains
17:51 < gmaxwell> petertodd: right, you can even say a foo-bond can only be used for one txout at a time.
17:52 < petertodd> gmaxwell: exactly
17:52 < gmaxwell> amiller: I realize this, as a fundamental way of making things scale better. ... making the global chain a metachain that validates cross-chain transactions, effectively. In which case it's reasonable for the local chains to all watch the global chain but not vice versa.
17:52 < amiller> right
17:52 < amiller> yeah... well put
17:53 < petertodd> worst comes to worst, use the global chain for consensus on the fidelity bonds
17:54 < petertodd> And the existence of a global chain can be used directly for your proof-of-work algorithm via proof-of-sacrifice.
17:57 < amiller> ok so along the way, at the very least we've talked just now about a new result for SPV verification
17:57 < amiller> you can sample work and show that a coin *is still available/unspent* without even having to validate all the headers
17:58 < petertodd> ? I missed how that works
17:58 < amiller> petertodd, do you know the work-sampling idea
17:59 < petertodd> amiller: no
18:00 < amiller> petertodd, https://bitcointalk.org/index.php?topic=98986.0
18:00 < amiller> if you have some big collection of blocks, and you want to estimate the total amount of proof-of-work used to create them all, you can do that just by sampling a really small number of them
18:01 < amiller> if there are a million blocks with at least two zeros 00xxxxx
18:01 < petertodd> right, seems obvious enough
18:02 < amiller> then there are probably at least a hundred blocks with several more zeros 00000xxx
18:02 < gmaxwell> amiller: works for large numbers, not so much for small numbers though.. and that doesn't prove they're connected, unless the structure is changed to link along the hash highway.
18:03 < amiller> the structure can be changed pretty efficiently to have a sort of skip-list-like thing to make it easier to produce that sample
18:03 < amiller> for spv it's not necessary to prove they're connected, you just have to prove they all don't disagree
18:04 < petertodd> amiller: merkle mountain range: https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md
18:04 < petertodd> how are you going to show they don't disagree?
18:04 < gmaxwell> I'm not actually sure if that's better for proving header difficulty than a straight non-interactive cut and choose. The latter is easier for putting proofs in just some blocks.
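amiller's work-sampling intuition can be simulated in a few lines. Each valid block hash is (roughly) uniform in [0, target), so the probability one falls below a much rarer threshold T is T/target, and the count of sub-threshold hashes estimates the total block count and hence total work. This is a toy simulation of the statistics only (it proves nothing about connectedness, which is exactly gmaxwell's objection); names and parameters are illustrative.

```python
import random

def estimate_block_count(hashes, target: int, threshold: int) -> float:
    """Estimate how many valid blocks exist from a rare-hash sample:
    count hashes below the threshold, then scale by target/threshold."""
    count = sum(1 for h in hashes if h < threshold)
    return count * target / threshold

random.seed(42)
target = 2**240                        # per-block difficulty target
true_blocks = 100_000
hashes = [random.randrange(target) for _ in range(true_blocks)]
threshold = target // 1000             # ~100 sub-threshold blocks expected
est = estimate_block_count(hashes, target, threshold)
# est lands within sampling error (roughly +/-10%) of the true 100,000
```

This is also why gmaxwell says it "works for large numbers, not so much for small numbers": the estimate's relative error scales like 1/sqrt(expected sub-threshold count), so a sample expecting only a handful of rare blocks is nearly useless.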
18:05 < amiller> petertodd, by showing that each member of the sample commits to a utxo set, and that each utxo set still has the transaction in it i want to prove still exists
18:05 < gmaxwell> petertodd: you repeat the proof for each block - e.g. it's unspent here and here and here and here. you don't need to show they're connected.
18:05 < gmaxwell> big proof though.
18:06 < amiller> gmaxwell, i think you might be right about cut and choose working just as well
18:07 < amiller> in any case it's basically just possible to do this
18:09 < petertodd> amiller: Why not just do a binary search?
18:10 < petertodd> amiller: Oh wait, I'm dumb...
18:11 < gmaxwell> it's kinda sad no one has proposed a non-interactive cut and choose to bootstrap spv faster.
18:11 < amiller> i guess i still don't know how to efficiently prove that it wasn't spent in the last 10 blocks, because you can fake that work more easily
18:11 < petertodd> Well, SPV bootstraps pretty fast anyway...
18:11 < amiller> i think i worked out that you could sample work more finely towards the front and get some benefit
18:12 < petertodd> amiller: Proving a coin wasn't spent recently is always going to be insecure - you only have a recently mined block as witness.
18:12 < gmaxwell> petertodd: they're distributing "checkpoints" with SPV clients now to make them bootstrap fast. :(
18:13 < petertodd> amiller: I mentioned to TD earlier today the idea of miners committing to a merkle tree of txids in their mempool, just to prove visibility; you could use that if the commitment included the txins being spent.
18:13 < gmaxwell> (though their checkpoints aren't the same kind of thing the reference client has - at least in bitcoinj-based stuff they're a "if you can connect back at least this far, the sum of the rest of the diff is Y", as far as I understand it)
18:14 < petertodd> gmaxwell: What? True, I guess on a cellphone ~100MB adds up or whatever it is...
18:14 < gmaxwell> well it's 20mbytes right now.
18:14 < gmaxwell> but the fetching isn't very efficient.
18:14 < gmaxwell> e.g. not pipelined.
18:15 < petertodd> gmaxwell: What do you mean by pipelined? You just mean we can't ask for more than one block header at a time?
18:17 < gmaxwell> I thought they did scalar fetching instead of pipelining, but I might be incorrect. I'm going by what I've seen from logged getheaders, but perhaps I'm just missing them setting the count to >1.
18:17 < gmaxwell> Otherwise I don't really understand the reason for the optimization.
18:18 < petertodd> gah, power's out, wonder how long the UPSes at work last...
18:18 < petertodd> gmaxwell: TD's NSA handlers?
18:19 < petertodd> I guess you should be able to set your bloom filter to match nothing, then ask for sequences of blocks, and get just the headers pipelined
18:22 < gmaxwell> petertodd: I mean, getheaders works just like getblocks and should be able to pipeline.
18:23 < gmaxwell> I just didn't think it was being used that way; but it's likely that I'm stupid
19:32 < amiller> so this should also work with other-than-proof-of-work
19:33 < amiller> suppose there are just two separately-trusted serializer entities like opentransaction servers or a quorum or whatever
19:36 < amiller> eh i'll finish that thought later
--- Log closed Tue Jul 09 00:00:22 2013