00:51:54 | adam3us: | who made the first altcoin? |
00:52:17 | belcher: | i believe it was namecoin |
00:52:42 | adam3us: | hmm ok nm. not making my case as it's a non-clone. oh well. |
00:52:57 | belcher: | litecoin is probably the second |
00:53:28 | adam3us: | nah there was one before litecoin, litecoin is a clone of it, it was by artForz and called?? |
00:53:37 | op_mul: | tenebrix. |
00:53:38 | adam3us: | tenebrix |
00:53:41 | adam3us: | there we go yes. |
00:53:52 | adam3us: | but i was imagining there were many more before? |
00:53:54 | op_mul: | http://mapofcoins.com/ |
00:54:43 | adam3us: | oho solidcoin SC or IXC |
00:55:48 | adam3us: | https://bitcointalk.org/index.php?topic=38453.0 |
00:56:30 | adam3us: | that'll do solidcoin is kind of infamous the one Luke-Jr killed :) |
00:56:41 | op_mul: | wasn't that coiled coin? |
00:56:51 | adam3us: | op_mul: sorry yes you're right |
01:13:17 | kanzure: | adam3us: does testnet count as an altcoin? |
01:21:02 | michagogo: | And how about Luke's TBC? |
01:21:21 | op_mul: | TBC isn't an altcoin, it's just a client that changes the rendering of the values. |
01:22:31 | michagogo: | op_mul: Luke would disagree :P |
01:22:42 | michagogo: | (I think I agree with you, though) |
01:25:12 | Luke-Jr: | gotta think it through :P |
01:59:50 | adam3us: | this was the reason why… https://bitcointalk.org/index.php?topic=911339.new#new |
02:07:40 | kanzure: | adam3us: hmm, i thought the reason why people made these forks was because they wanted a separate ledger to take advantage of people's inability to keep track of which ledger or blockchain they should be using. the supply function parameter fiddling is separate, i think. |
02:08:20 | adam3us: | kanzure: i'm pretty confident that if there was no economic incentive they wouldnt have opened their compiler. |
02:11:32 | adam3us: | kanzure: my point is you can scale bitcoin and its better to use a single unit of account for network effect reasons, and really there is a simple method to value a coins cost and compare that to bitcoins, and yet people are paying huge multiples of that cost for less utility. |
02:12:33 | kanzure: | presumably they are paying those huge multiples because of the separate ledgers they are speculating on, not because they value proof-of-work bitcoin scarcity less than some speculative alternative they forgot to evaluate |
02:13:28 | adam3us: | kanzure: you have to hope most people are speculating knowing its a zero sum thing and hoping to get out before it crashes as there is no way most of these things could last long. |
02:13:44 | kanzure: | well, actually, it's quite possible that people are literally doing coin cost evaluations as you say, i suppose |
02:14:17 | adam3us: | kanzure: most alts have no wallets, no merchant integration, no transactions (other than speculating on cryptsy etc) |
02:14:24 | kanzure: | hmm so my alternative, which i admit i am not good at expressing, is related to how an onboarding user knows whether or not they are actually using the bitcoin blockchain (or any other blockchain) (or why they should prefer one blockchain to another for their commerce/transactions/business) |
02:14:58 | adam3us: | the other point about the pacman game is the features of the coin are separable from it |
02:15:07 | kanzure: | right, so basically it's obvious that they are hoping that others will also choose this other blockchain as a speculation on future integration |
02:15:14 | kanzure: | and it's not so much about the cost per coin |
02:15:46 | adam3us: | but the feature can be copied to any coin, so why would someone value a new coin |
02:16:12 | kanzure: | oh, for the incentive reasons you identified above, right? speculation on integration/adoption |
02:16:44 | kanzure: | *for the same |
02:17:22 | adam3us: | wouldnt it make sense to go play roulette or synthetic stock wars or something? |
02:17:34 | kanzure: | i agree with you that it's crazy, i'm just nitpicking about what they are probably thinking about (not so much the cost per coin specifically) |
02:18:01 | kanzure: | yeah, probably, but i doubt people have ridiculously easy access to that often |
02:18:15 | adam3us: | well if any of them are genuinely suckered into thinking black gold coin is going somewhere they're going to get ripped off. |
02:18:32 | kanzure: | one anecdote i keep bringing up is a buddy of mine after being introduced to bitcoin on day two became a day trader for some reason, the guy never did any forex or currency day trading in his life before that point |
02:19:05 | adam3us: | my wife's cousin got cold called by some boilerroomers trying to sell him black gold coin. he went to his bank and they told him they wouldnt send the money because the recipient account was blacklisted as probably fraud. (he's a bit of a sucker) |
02:19:46 | kanzure: | (when i pointed this out to him he realized the error he had made and then promptly forgot why he did that) |
02:20:02 | adam3us: | if you read the web page for black gold coin its talking crazy stuff, that the coin is backed by a virtual element called black gold. |
02:20:31 | kanzure: | wait, i remember now, it was something about "well if you sell and then buy back, you can make money obviously, duh" (i'm paraphrasing) |
02:20:51 | adam3us: | its gamified. cryptsy needs to unplug from actual coins and go pure synthetic virtual mining virtual coins that exist nowhere, and are not mined. |
02:21:09 | kanzure: | right, i think zynga should look into that for sure |
02:21:16 | kanzure: | that can be cryptsy's exit strategy! |
02:23:10 | kanzure: | i'm trying to think of any way for me to dig up data to confirm my statements above regarding "it's more about speculation on alternative-blockchain integration/adoption" but i dunno where i would find that, maybe deep in some awful bitcointalk threads... |
02:24:13 | kanzure: | i mean, clearly with on adoption the price per coin doesn't matter anyway, because you wont be able to sell it to anyone ever by definition |
02:24:19 | kanzure: | *without adoption |
02:24:40 | adam3us: | i mean it has to be one of 3 things or a combination right: knowing speculation, unknowing speculation/investment, or confusion about utility being attached to a coin as if code cant be copied. |
02:24:44 | kanzure: | oh maybe i meant *with zero adoption |
02:25:40 | adam3us: | yes. well a tell tale sign is spammy marketing drives. a guy jumped into the middle of a technical bitcointalk thread yesterday to pump some special offer for getting stellar handouts. |
02:26:01 | adam3us: | give aways, marketing stories, professionally edited videos, nice websites, etc its completely transparent. |
02:26:18 | kanzure: | ah right, the giveaways are a good example to reason from |
02:28:00 | op_mul: | kanzure: I don't think crypsty needs an exit strategy. they make bank from the fees alone. |
02:34:26 | kanzure: | although, on the other hand, the integration speculation is much more obviously a bad bet |
02:34:41 | kanzure: | so you'd have to ask why it happens if that's what's going on |
02:35:06 | kanzure: | (compared to adam3us's notes about pricing of scarcity and quantum magic) |
02:35:41 | adam3us: | kanzure: there are some real suckers. my wife's cousin tried to wire these black coin boiler room people money. his bank protected him. they're non technical and heard of bitcoin. or they feel they missed out on early adoption and maybe this one is different. |
02:36:07 | adam3us: | kanzure: bryce weiner is a good example of a guy pumping stuff. he's always talking up some coin or other with faux analysis |
02:37:08 | op_mul: | adam3us: shut up and read some published content on how to evaluate altcoins. https://github.com/aantonop/bitcoinbook/blob/develop/ch09.asciidoc |
02:38:15 | adam3us: | op_mul: it must be right its written by bitcoin expert antonopoulos! |
02:38:27 | kanzure: | someone should definitely do a writeup of an explanation of how to evaluate altcoin opportunities, and include a factor like "technicalities that i am not familiar with" that has like a 99.9999% weight |
02:38:49 | op_mul: | adam3us: honestly give it a read, you'll be in tears. |
02:39:15 | op_mul: | some of the altcoins suggested don't even have consensus methods at all. |
02:39:25 | adam3us: | op_mul: lol consensus innovation. i imagine its all broken. seriously someone should fork the crap out of it. or its completely centralised. |
02:40:00 | adam3us: | kanzure: so reading that.. what do you think the author was thinking? |
02:40:01 | op_mul: | "Since then, innovation in the consensus mechanism has continued at a frenetic pace. Several alt coins adopted a variety of algorithms such as scrypt, scrypt-N, Skein, Groestl, SHA3, X11, Blake, and others." |
02:40:26 | kanzure: | adam3us: reading what, sorry? |
02:40:38 | adam3us: | op_mul: link op_mul posted |
02:40:52 | kanzure: | "i have to be careful writing about this topic or else my audience will lynch me and i will lose face" |
02:41:30 | kanzure: | i'd imagine that some calculation like that went through his head for this section (are we sure he didn't just pay someone to write his book for him?) |
02:41:51 | adam3us: | i mean either they're unbelievably naive or they're like the people selling cheap plastic stuff on those us tv channels with nothing but marketing fluff for useless junk that'll break within hours |
02:41:59 | op_mul: | I think it's actually a list of the altcoins he has bought into. |
02:42:32 | adam3us: | omfg u think? well the git checkin has another name so maybe it was ghost written |
02:42:52 | op_mul: | adam3us: no, that's the "editor" (who missed spelling mistakes). |
02:42:54 | adam3us: | https://github.com/myarbrough |
02:43:02 | kanzure: | https://github.com/aantonop/bitcoinbook/commit/6d94cc23311ffa87289c9ad17cd4eec6c37db892 |
02:43:28 | kanzure: | https://github.com/aantonop/bitcoinbook/commits/develop/ch09.asciidoc?page=2 |
02:44:22 | adam3us: | Many are just get-rich-quick schemes by their creators. To evaluate alt coins, I look at their defining characteristics and their market metrics. |
02:44:39 | adam3us: | Here are some of the key financial and market metrics to consider: |
02:44:39 | adam3us: | • What is the total market capitalization of alt coin? |
02:44:39 | adam3us: | • How many estimated users/wallets does the alt coin have? |
02:44:39 | adam3us: | • How many merchants accept the alt coin? |
02:44:39 | adam3us: | • How many daily transactions (volume) are executed on the alt coin? |
02:44:40 | adam3us: | • How much value is transacted daily? |
02:45:05 | adam3us: | holy moly. |
02:45:08 | gmaxwell: | Does that list have any item in it that can't be trivially faked? |
02:45:41 | adam3us: | sorry antonopoulos credibility just dropped |
02:45:49 | op_mul: | he had any to begin with? |
02:46:50 | kanzure: | op_mul: so do you think it is more likely that it is his list of altcoins he has bought/got paid in, or any chance that he was just trying to avoid getting lynched by altcoiners to keep them from torpedoing him? |
02:47:51 | adam3us: | i think he maybe went through a pro-alt phase |
02:48:08 | gmaxwell: | My experience is that altcoiners will usually leave you alone if you leave them alone; you only get attacked if you mention something that might influence them... and death threats and such seem to take actually calling them out by name. |
02:48:18 | kanzure: | ah |
02:48:22 | op_mul: | kanzure: there were some comments on this from him when he published the first draft, lots of outcry that he didn't pump whatever altcoin. I think it's just a list of ones he owns, CureCoin and GridCoin for instance don't even have proof of work. |
02:48:52 | adam3us: | * adam3us sleeps |
02:48:53 | op_mul: | well they do, but it's not related to their stated aims of BOINC and folding@home. |
02:49:09 | kanzure: | adam3us: i think that to outsiders the "control freak" label is warranted because never have people been exposed to people being so critical about the differences between connecting to one server implementation or another for "blockchain data" |
02:49:33 | kanzure: | adam3us: it may be helpful to explain more generally why being on the same blockchain is so important. this may not be obvious to others. |
02:49:43 | kanzure: | (important to users, i mean) |
02:49:54 | kanzure: | or rather, why it should be important to users |
02:51:57 | gmaxwell: | I've had pretty good luck in 1:1 discussions impressing on people some of the scope and challenge of consistency in consensus implementations; and the significant dangers of reimplementing. But 1:1 doesn't scale, and any time I've said anything not 1:1 I've been treated pretty rudely by people who zip to simpler assumptions about my motivations. |
02:54:25 | op_mul: | can anybody actually defend implementations? as far as I can see it's all negatives. |
02:54:52 | kanzure: | eh, learning tool? as long as nobody uses it |
02:56:01 | op_mul: | pretty rough learning tool |
02:56:14 | kanzure: | reimplementing something is often a good way to spot your own errors |
02:56:49 | gmaxwell: | Like a lot of things, it's just not worth talking about. Somewhat similar to the issue with altcoins in fact; if I see some altcoin is doing something I think is incorrect or outright dishonest and I speak up; I'm taking a risk that I'll be attacked with things like death threats. About the most I can safely do is speak generically so that no one is threatened. (By experiment I've found some general patterns where I can go beyond that without trouble, but it's generally true) ... and ... I personally gain _nothing_ from pointing out why this or that claim is probably spurious; at least nothing beyond the amorphous feeling of maybe helping my fellow man not make a bad decision. |
02:58:33 | gmaxwell: | kanzure: yes, its a great documentation exercise to rewrite something and throw the result out. Though I'm surprised and concerned that of the reimplementers only bluematt and roconnor seem to have discovered and reported a considerable amount of latent behavior. |
02:58:47 | BlueMatt: | "considerable" |
02:58:49 | op_mul: | kanzure: sure, but for something like bitcoin reimplementations it's a hell of a lot of work to get to the point where you can actually find anything interesting. |
02:58:56 | kanzure: | what counts as considerable here, BlueMatt |
02:59:20 | BlueMatt: | notice even minor things and actually report them |
03:00:30 | gmaxwell: | kanzure: considerable may be basically more than zero. |
03:01:01 | kanzure: | wonderful |
05:22:34 | bramc: | Hey everybody |
05:22:46 | kanzure: | hello |
05:24:44 | bramc: | It turns out there's an interesting approach to tackling the electricity problem |
05:25:14 | maaku: | bramc: the electricity problem? |
05:25:49 | bramc: | maaku, the problem that mining inevitably winds up producing economic waste equal to the value of the rewards because it gets spent on electricity |
05:26:13 | op_mul: | I'd call that a core function rather than a problem |
05:26:18 | bramc: | The potential loophole is to use a calculation which uses no electricity and no time |
05:27:06 | bramc: | op_mul, The use of resources is a core function, but ideally the calculation would demonstrate depreciation of already extant resources rather than require the sinking of new resources |
05:31:30 | bramc: | nobody seems curious about my idea |
05:32:02 | gwillen: | I think we're all assuming you're about to provide it and we're waiting to hear it |
05:32:19 | bramc: | gwillen, fair enough |
05:32:35 | kanzure: | and i think you probably lost maaku already :p |
05:32:50 | kanzure: | given his previous thoughts regarding the universal scarcity of entropy |
05:32:52 | maaku: | i will hold off my cynicism until he at least explains his idea :P |
05:32:54 | gwillen: | (I'm curious but skeptical of the idea of demonstrating consumption of an existing resource, in a way where that demonstration can't be reused, without wasting any new resources.) |
05:33:11 | kanzure: | maaku: no, no, your cynicism amuses me and is totally necessary |
05:34:26 | bramc: | The various attempts at asic resistance have focused on making the calculation use memory, which makes a fair amount of sense: If you spend time waiting for memory you aren't using electricity during that time, and memory is much more commodity than CPU calculations are |
05:34:28 | maaku: | i've gotten enough use out of bittorrent to allow bramc a few cycles of my time in good faith |
05:35:10 | bramc: | The other idea I suggested before is to use proofs of sequential work, aka proofs of time, as essentially placeholders where everybody sits around doing nothing until the proof of time is done |
05:36:19 | bramc: | These both can probably do a pretty good job of asic resistance, but I think they don't go far enough |
05:37:03 | phantomcircuit: | gmaxwell, fewer people getting burned by altcoin schemes leaves more people who are still interested in genuine advancements |
05:37:18 | kanzure: | what were your prior objections to the objections to asic resistance? |
05:37:21 | phantomcircuit: | improvements to the commons at personal expense and what not |
05:37:24 | bramc: | What you need is a proof of *storage*, because storage is vastly more commodity than even memory is: there's basically no way to optimize it more than is already done. and there's lots of it always sitting out there depreciating. And there's a simple proof of storage calculation which can be done. Embarrassingly trivial, really. |
05:37:49 | phantomcircuit: | bramc, except actually using storage is often very expensive |
05:37:50 | kanzure: | salting isn't enough |
05:38:44 | bramc: | kanzure, I have nothing against asic resistance, it's a worthy thing, the problem is that it doesn't fix the problem that electricity will be essentially wasted on mining, exactly in accordance with what the mining rewards are |
05:38:56 | kanzure: | yes i know you have nothing against asic resistance |
05:39:01 | kanzure: | i was asking about your objections to the objections to asic resistance |
05:39:49 | bramc: | phantomcircuit, hold on, I'm getting to how to design the proofs of work so that basically nothing other than storage is useful |
05:39:51 | gmaxwell: | bramc: do you mean https://bitcointalk.org/index.php?topic=310323.0 or more like permacoin? |
05:40:34 | bramc: | gmaxwell, no nothing like any of those |
05:41:27 | phantomcircuit: | bramc, so that the storage is useful, but not access times for that storage? |
05:42:02 | bramc: | Now this gets into the part which is really weird. Absolutely central to my idea is alternation between proofs of storage and proofs of time, with essentially no time at all being used in the proofs of storage and all of it spent in the proofs of time, but the two interacting |
05:43:59 | bramc: | Here's the proof of storage, which might help clarify or make what I'm saying even more mysterious: The proof of storage is a public key whose hash when xored to the hash of the previous proof of time is as small as possible |
05:44:44 | bramc: | To initialize everything, you fill up all available storage with a list of public keys sorted by their secure hash, the more the better |
05:45:41 | bramc: | (there are some constant factor improvements on that, but I'll skip over them for now because they don't change anything fundamentally) |
05:46:56 | bramc: | After you find out the output of the previous proof of time, you go look up the best proof of storage you have locally, use it to sign a new set of transactions, and start giving these to your peers. You re-transmit any record breaking one which peers might send you |
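A minimal sketch of the lookup step being described, under assumed simplifications (random 32-byte strings stand in for public keys, SHA-256 for the hash, and "closeness" is XOR distance to the previous proof-of-time output); an illustration of the idea, not bramc's implementation:

```python
import bisect, hashlib, os

def H(b: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b).digest(), "big")

# Plot phase: fill available storage with (hash, pubkey) pairs sorted by hash.
plot = sorted((H(pk), pk) for pk in (os.urandom(32) for _ in range(100_000)))
hashes = [h for h, _ in plot]

def best_proof_of_storage(target: int):
    """Return the stored key whose hash XOR target is smallest.
    Neighbours in sorted order share the longest prefix with the target,
    so checking a few entries around the insertion point is a cheap
    approximation of the true XOR-distance minimum."""
    i = bisect.bisect_left(hashes, target)
    candidates = plot[max(0, i - 2): i + 2]
    return min(candidates, key=lambda hp: hp[0] ^ target)

challenge = H(b"output of the previous sequential proof of work")
h, pk = best_proof_of_storage(challenge)
print("leading bits matched:", 256 - (h ^ challenge).bit_length())
```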
05:48:44 | gmaxwell: | bramc: I'm not following your construction. From what you've described, I can work that function with no memory by grinding public keys instead of sequential POW. (e.g. compute one sequential POW, now generate unbounded pubkeys for a match) |
05:48:49 | bramc: | Meanwhile some people out there have supercooled overclocked machines which are working on the proofs of time. They do a proof of time on the best proof of storage anybody puts out (there's no direct reward for finding a proof of time) |
05:49:07 | kanzure: | wasn't there a vanitygen grinder proposal at some point |
05:50:09 | bramc: | gmaxwell, Yes you can do the same calculation by grinding keys, the problem is that takes time, while doing a lookup of storage is instantaneous, and the interaction with the proofs of time makes the grinding approach lose badly, because it falls behind on time |
05:51:32 | op_mul: | bramc: there is actually an altcoin that does proof of storage. |
05:53:31 | op_mul: | it has users pre-compute tables of H(nonce + pubkey), and then uses that to salt the previous block hash. the more nonces you have in your lookup table, the more "hashpower" you have. |
05:54:18 | gmaxwell: | bramc: uh access times are not at all instant; abstractly storage capacity scales much better than throughput/latency due to difference between volume and surface area. I can generate on the order of 20 million pubkeys per second on a single cpu. It's not that hard to beat out storage access times. |
05:55:07 | bramc: | The interaction with proof of time is that there's a single composite difficulty function for proofs of storage and proofs of time, and alternating steps are paired and their combined value has to be over the threshold. The formula is you take the value of the hash of the mined value of the public key taken from proof of storage, xor it with the output of the last cycle, take 1/x, then multiply that by the number of iterations the proof of time went through. That gives a composite value which has to be greater than the target |
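Restated as code, one possible reading of that composite check; the names, types, and the division guard are illustrative assumptions, not taken from any implementation:

```python
import hashlib

def H(b: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b).digest(), "big")

def composite_ok(storage_pubkey: bytes, prev_cycle_output: int,
                 time_iterations: int, target: float) -> bool:
    # x: how close the storage proof's key hash is to the last cycle's output.
    x = (H(storage_pubkey) ^ prev_cycle_output) or 1   # guard the exact-match case
    # A closer storage match (smaller x) needs fewer proof-of-time iterations.
    return (1.0 / x) * time_iterations > target
```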
05:57:14 | op_mul: | gmaxwell: burstcoin tries to get around that by splitting the table up into "plots" of 4096 based on some feature of the previous block hash. you only end up reading one 4096th of the total data as a result. in their case it's not faster to mine it PoW style, or at least not sensible. |
05:57:34 | bramc: | gmaxwell, Let's say that a machine has a terabyte of data. With some optimizations that can effectively store 250 billion sorted keys. If you have 10 cpus generating 20 million keys per second, that's 200 million keys/second, so it will take you about 1000 seconds to grind through that amount of public keys. Time to look it up after it's pre-calculated: a few milliseconds |
05:58:08 | bramc: | op_mul, I haven't heard of burstcoin before, I thought you were talking about permacoin, which does a completely different thing |
05:58:30 | phantomcircuit: | gmaxwell, not to mention building custom systems optimized specifically for massive concurrent random io access is pretty trivial |
05:58:33 | bramc: | Although that plots thing is much less effective than what I'm talking about |
05:59:08 | op_mul: | bramc: be warned, it's a shitty NXT fork. https://bitcointalk.org/index.php?topic=731923.0 |
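A very rough sketch of the burstcoin-style lookup op_mul outlines (precomputed H(nonce + pubkey) tables, with the previous block hash selecting one of 4096 "plots" to scan); the constants, layout, and scoring rule here are assumptions for illustration and not burstcoin's actual algorithm:

```python
import hashlib, os

def H(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

PUBKEY = os.urandom(32)
N_PLOTS = 4096

# Plotting: precompute H(nonce + pubkey) once, bucketed into 4096 plots.
plots = [[] for _ in range(N_PLOTS)]
for nonce in range(200_000):
    digest = H(nonce.to_bytes(8, "big"), PUBKEY)
    plots[digest % N_PLOTS].append((nonce, digest))

def best_nonce(prev_block_hash: bytes):
    # The previous block hash picks which 1/4096th of the stored data is read.
    scoop = H(prev_block_hash) % N_PLOTS
    salt = H(prev_block_hash, b"salt")
    # Smallest (digest XOR salt) in that plot wins; more precomputed nonces
    # means more chances at a small value, i.e. more "hashpower".
    return min(plots[scoop], key=lambda nd: nd[1] ^ salt, default=None)

print(best_nonce(os.urandom(32)))
```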
06:00:29 | gmaxwell: | bramc: so what you're saying is that you do a sequential-POW and then you find the nearest pubkey you have to that output value, and the closeness of the match determines the work-factor of your composite solution? |
06:00:41 | phantomcircuit: | bramc, asic resistance is whatever, but you really cant assume that other operations cannot be optimized just as much with custom hardware |
06:01:09 | bramc: | gmaxwell, the closeness of the match determines how much work will be required in the next sequential-POW |
06:02:06 | phantomcircuit: | actually i wonder if there's a commercial solution available for that |
06:02:08 | bramc: | phantomcircuit, the idea here is that it's dodging the whole issue, the only thing any time is spent on is the sequential-pow |
06:02:12 | gmaxwell: | bramc: so there is a weird surface there of storage vs sequential speed that have the same apparent performance. |
06:03:15 | bramc: | gmaxwell, not sure what you mean. The assumption is that the proof of storage is published and anyone running a sequential-pow runs it on whatever thing will result in them finishing fastest |
06:03:30 | bramc: | and as soon as they finish they blast it out to everybody else |
06:03:46 | gmaxwell: | So I can fork this to make it parallel. E.g. I make a step but then select the _two_ nearest pubkeys, and start two sequential processes. And then find four new solutions; and then I cull to pick the two best of those. |
06:04:13 | gmaxwell: | (and of course that same idea expands to whatever level of parallelism I have available) |
06:05:17 | bramc: | gmaxwell, yes that's what I'm working through now. If you make it so that the sequential-pow is done on the 50th best proof of storage that attack doesn't work very well |
06:06:14 | gmaxwell: | bramc: the surface comment just meant that if someone had a N times faster sequential processor they would need less storage (not n times less, there is a sqrt somewhere in the formula, but I haven't bothered to figure it out) |
06:07:31 | bramc: | Oh, yeah, of course you can mine faster if you have a faster sequential-pow. The idea is (a) the sequential-pow will be one which is hard to beat, and (b) to the extent that multiple counterparties all beat it, they wind up screwing each other in mutually assured destruction |
06:07:48 | gmaxwell: | As far as publishing, well you can't guarantee that an adversarial party will publish anything, esp if it might not be in their interest to do so. |
06:08:05 | bramc: | because whoever has the next-fastest sequential-pow publishes it as soon as they can to stop there being an advantage to the fastest sequential-pow |
06:08:56 | bramc: | Sort of like all those VCs thinking they're going to make custom ASICs and get 60% of all mining capacity :-) |
06:09:00 | gmaxwell: | as an aside this 'helpfully' incentivizes the construction of industrial scale capacity for cracking the public key cryptosystem. |
06:09:26 | bramc: | Let's assume that the public key cryptosystem is secure |
06:09:35 | gmaxwell: | assuming they use the capacity to publish instead of instead using it to explore alternative forks to find better hits on their own pubkeys. |
06:10:45 | bramc: | They lose more from somebody else getting hits faster than they gain from finding faster hits themselves. At least I think that's how it works. I haven't finished analyzing this whole thing |
06:11:48 | bramc: | I'm a little more concerned with attacks where somebody takes the nth best pubkey they have and uses that because it results in a much better next generation pubkey for themselves, hence my comment about the 50th best pubkey setting the work factor |
06:12:25 | gmaxwell: | I thought your goal there was to achieve a state where basically anyone could run the sequential part about equally fast thus ensuring that basically no one runs it. Oh I see you want to advertise your hit as fast as possible to get other sequential parties working for you. |
06:13:09 | gmaxwell: | yea, that was my comment about exploring multiple paths. I think you can use parallel lookahead to get an unbounded speedup in the overall game. |
06:13:23 | bramc: | Yes the sequential part is designed to try to be asic resistant as well, although that of course has some level of attackability |
06:14:25 | bramc: | I think I can get parallel lookahead to not work very well, and yeah the idea is that everybody should be incented to blast out their hits both on proof of storage and sequential proof of work as quickly as possible |
06:14:54 | bramc: | I think if you use the nth best then someone who only has a fraction of the storage capacity loses fairly quickly |
06:14:57 | gmaxwell: | e.g. say I'm an attacker divorced from the public system, for the moment, and I can run the sequential program just as fast as anyone else, and my storage is typical; but I also can run the sequential 10,000x in parallel, so I just keep exploring the top 10000 total cumulative work paths, hoping to find a very lucky enumeration that takes me further ahead of the network. |
06:15:30 | gmaxwell: | I don't follow how you would enforce the Nth best? |
06:16:00 | op_mul: | gmaxwell: I thought that might be possible for burst coin too. same sort of thing as stake grinding, in effect. |
06:16:22 | maaku: | what are the supposed benefits of this scheme? |
06:16:46 | maaku: | i thought it was supposed to avoid energy wastage, but we're down in the weeds talking about grinding strategies here |
06:17:01 | bramc: | In addition to screaming out the very best storage hit, I co-sign it with my n-1 best matches, unless there are some better matches out there which have been sent to me, in which case I send those out. The work factor is determined by the value of the nth best |
06:17:01 | gmaxwell: | it's energy efficient. The scheme rewards you according to the storage*time you have. ... at least if it prevented grinding strategies. |
06:17:33 | bramc: | gmaxwell, Yes exactly, and I think I can block grinding strategies |
06:19:04 | bramc: | if only the very best one is used then grinding works well, if the 50 best are used grinding just fails, because the 50th best match across one's own storage will always be roughly the same |
06:19:29 | gmaxwell: | okay, but instead of my N-1 I skip every other one. and that allows me two solutions, one with my best and one with my second best, and one where the work factor is the 100th and one where it's the 101st. The fall-off is linear. I don't think it cancels out, because at the end you end up with four new solutions not two. |
06:20:14 | bramc: | Now I'm really not following you |
06:21:46 | gmaxwell: | bramc: I generate two solutions, one with (1,3,5...100) and one with (2,4,6,..101). ~100 is half as good as 50, but I can double my storage and be exactly back where I started in terms of sequential throughput, but now I get an alternative history that might be better for me. |
06:22:39 | bramc: | I don't think I've made my 50th best idea clear |
06:23:11 | gmaxwell: | I think you have? you don't just give your best, you give 50 monotonically worsening solutions, and the 50th is used for the difficulty of the next step. |
06:24:09 | bramc: | The idea is that a proof of storage consists of a single very well matching key signing a transaction log, followed by 49 other not quite so well matching keys signing the same transaction log, and the amount of time which must be spent on the sequential-pow is based on the worst of those keys |
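A toy simulation of why the worst-of-50 rule blunts grinding, under an assumed simplified model: stored hashes are uniform draws on [0, 1) and distance is plain absolute difference rather than XOR. The point it illustrates is that the 50th-best distance has low relative variance, so a party with a tenth of the storage is reliably about ten times worse off, with little luck to exploit:

```python
import random

def worst_of_k(n_keys: int, k: int = 50, trials: int = 200) -> float:
    """Average distance of the k-th best match out of n_keys stored hashes."""
    total = 0.0
    for _ in range(trials):
        target = random.random()
        dists = sorted(abs(random.random() - target) for _ in range(n_keys))
        total += dists[k - 1]
    return total / trials

print("full storage   :", worst_of_k(100_000))   # tightly concentrated
print("1/10th storage :", worst_of_k(10_000))    # about 10x larger
```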
06:24:29 | gmaxwell: | And I show that you can split your deck, taking the 100 top solutions and partitioning in two, to get two answers. And if your storage is doubled your 100th solution is as good as the 50th would have been, but now you get the parallel lookahead advantage. |
06:25:31 | bramc: | Oh you don't need to partition them that way, you can just make one by 1, 2, 3 ... 50 and the other by 1, 2, 3 ... 48, 49, 51. You can then run lots of small variants in parallel to see which one is best for your own storage |
06:25:36 | gmaxwell: | I don't think what they sign matters at all, why would you even bother signing with anything but the first? or the 50th? The matching formula has a strong one way function in it. |
06:26:19 | gmaxwell: | bramc: ah, indeed, I was bogusly assuming that they needed to be disjoint. |
06:27:06 | bramc: | The problem is that with the 50th best none of them will be a big standout. If you have a small fraction of the total storage capacity the chances of getting lucky with a major standout are like one in a trillion |
06:27:38 | bramc: | I made up that one in a trillion. I'm running some simulations and really need to do some math at some point |
06:27:52 | gmaxwell: | Oh I wasn't assuming a gain from the raw luck, I was assuming a gain from parallelism. |
06:28:16 | gmaxwell: | E.g. what I was suggesting was a way to convert parallel computation into profit for your presumably sequential system. |
06:28:25 | bramc: | The gain from parallelism has to do with what your chances of getting lucky are. |
06:29:04 | bramc: | Yes and with the simple single best formula that approach wrecks everything. But with the nth best the parallelism only buys you a very modest advantage unless you almost have a majority to begin with |
06:30:00 | gmaxwell: | in any case wrt luck the density of values within distance X of something is a linear ramp. (it's (2*x)/N) |
06:31:31 | bramc: | Yes, so if you have 10 times as much storage you expect it to be *on average* a tenth the value, what you get from the parallelism is the ability to pull out the best outlier for yourself. If there are enough samples, the outliers don't outlie very much. |
06:32:15 | bramc: | unless i make all my stored pubkeys be clustered... that might throw a spanner in the works |
06:34:12 | gmaxwell: | yea, thats an interesting point. Make your pubkeys non-uniform. Well it's even worse, because you can probably build something that looks like a rainbow table spanning subspaces of pubkeys. E.g. disk index X begins a chain of pubkeys which are super rich in solutions near x. You need to do a bunch more computation for this but computation is cheap. |
06:34:39 | bramc: | That would of course require a lot more generation time to fill my storage, but that doesn't sound terribly hard to deal with |
06:35:10 | gmaxwell: | e.g. you start with a key X and you deterministically step from one key to the next. And after some number of steps (say 100ms worth) you compute the centroid of the chainlet. and you store that chain root under that value. |
06:35:34 | gmaxwell: | you might perform a K medians clustering and store multiple heads to it as well. |
06:35:57 | gmaxwell: | e.g. this chunk of computation has good solutions for values near X, Y, or Z. |
06:37:11 | gmaxwell: | and you do the lookup, repeat the computation, and tada, you get lots of solutions near your target, also this expensive computation is embarassingly parallel, since you can be grinding on many chunks at once. |
06:37:14 | bramc: | Not sure what you mean, but the attacks are straightforward. If I spend as much cpu filling my storage as would be necessary to fill everybody else's storage, I can cluster my pubkeys well enough to do parallelism to find good matches for myself |
06:37:27 | gmaxwell: | And you can update your storage over time with increasingly more optimized chainlet nodes. |
06:38:00 | gmaxwell: | bramc: I'm explaining another kind of tradeoff that lets you do more computation to get a superlinear gain from your storage. |
06:38:33 | bramc: | oh I'm not following. I'm worrying about the one I said because it seems fairly devastating but would like to know yours as well |
06:38:50 | gmaxwell: | That instead of storing a map of hash->pubkey you store a map of hash->{family of pubkeys that are more near to hash than you'd expect}. |
06:39:14 | gmaxwell: | What I'm thinking about is kind of inspired by what you are thinking about (assuming I understood it) |
06:40:25 | bramc: | Oh you keep everything sorted, the assumption is that everybody does that. You also do the optimization that instead of storing the whole public key, you have some personal secret salt which is used in your keygen, and what's stored is the four bytes which are combined with the salt to generate the pubkey |
06:40:37 | gmaxwell: | no I don't. I'm not even storing the family. |
06:41:37 | bramc: | I'm not following. I'm thinking you basically bucket sort and then look up your best match by going to the location, generating the pubkey, and seeing how good it is |
06:44:23 | bramc: | Oh I know, the last hash can generate 50 points which you need to be close to instead of a single one |
06:44:43 | gmaxwell: | E.g. a narrower version that might be easier to explain. I compute pubkeys, I pick a random pubkey and check its hash, then I use that first pubkey as a PRNG seed and generate many more pubkeys (perhaps a million of them) and I sort the set, and if position 50 is sufficiently near position 1, I accept this chainlet and store hash->pubkey in my database. |
06:45:00 | gmaxwell: | or a million points where you require the best 50 to be close. |
06:45:37 | gmaxwell: | and you can constantly refine your storage by running this process and once you're full replacing the nearest match to pubkey1's hash with your new chainlet if your new chainlet is better. |
06:46:20 | gmaxwell: | kind of a beautiful algorithm. Wish I had a use for it. You should go deploy this system so I can profit from this optimization. :P |
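A hedged sketch of that refinement, with made-up sizes and a dict standing in for a bounded disk table: each stored entry is the seed of a deterministically derived family of keys, kept only if its family clusters unusually tightly around the seed's own hash, and replaced whenever further grinding finds a better seed for that slot:

```python
import hashlib, random

def H(b: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b).digest(), "big") % 2**32

def family(seed: bytes, size: int = 1_000):
    # Deterministically derive a family of stand-in "pubkeys" from the seed.
    rng = random.Random(seed)
    return [H(rng.randbytes(32)) for _ in range(size)]

def chainlet_score(seed: bytes, k: int = 50) -> int:
    # How tightly the k best family members cluster around the seed's own hash.
    target = H(seed)
    dists = sorted(abs(v - target) for v in family(seed))
    return dists[k - 1]

table = {}  # slot -> (seed, score)
for _ in range(500):                        # ongoing background grinding
    seed = random.randbytes(32)
    slot, score = H(seed) >> 24, chainlet_score(seed)
    if slot not in table or score < table[slot][1]:
        table[slot] = (seed, score)         # keep the better chainlet per slot

print(len(table), "slots filled/refined")
```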
06:46:39 | bramc: | Here's the thing: position 50 is never near position 1 |
06:46:56 | gmaxwell: | and it's no worse than the boring way of doing it, since you can just try the 50 nearest tips too. |
06:47:18 | bramc: | you basically never get particularly good outliers |
06:47:53 | gmaxwell: | the only thing extra is the extra computation for the million derived points which is just grinding. Also if the derivation structure is right you get a sqrt() gain because every internal step can be P1 of a new chainlet. |
06:48:19 | bramc: | and let's move on to my new idea: that the old sequential-pow specifies 50 positions, and you have to find a nearest match for those 50, and value is based on the worst one |
06:49:12 | bramc: | The thing you're saying doesn't work, I was just running simulations of basically the same thing and position 50 is always way off. |
06:49:27 | gmaxwell: | lol |
06:49:33 | gmaxwell: | go ahead, deploy it then. |
06:49:45 | bramc: | No I want to understand what you're saying |
06:49:47 | gmaxwell: | It really does work. And it is strictly no worse for a given amount of computation. |
06:50:11 | gmaxwell: | Think of it this way, say you have an already computed storage table. |
06:50:58 | bramc: | A storage table being a sorted list of compressed pubkeys |
06:51:20 | gmaxwell: | You're out of space. Then you do some more computation and find a new pubkey for slot X in your storage table. You ask yourself, "is this pubkey the head of a family of pubkeys which is closer to slot X than the one there now?" If so, you replace it. If not you don't. |
06:52:06 | gmaxwell: | When you do a lookup you don't just look at the nearest and its 50 neighbors, you also expand out (using parallel computation) the neighbors which have, over time, been refined to be closer to their named positions. |
06:53:03 | bramc: | Yeah that doesn't work: the chances of the family being closer are basically nil. But also I decided to change from the '50th closest' approach because of the other attack I said, so now I'm thinking of using 50 separate points |
06:53:03 | gmaxwell: | when you sort that list, it has the same 50 members the plain solution would have had, but it also has some additional close members from the process of refinement. |
06:54:04 | gmaxwell: | it's only basically nil for a single computation run, but it increases the more computation you do for the exact same reason your storage increases. And the computation can be amortized, because each pubkey you generate in the family is another potential root. |
06:54:10 | bramc: | That said, you might be onto something which makes for a reasonable constant factor improvement in your storage effectiveness, if you really are willing to grind a lot |
06:54:38 | gmaxwell: | Yes, it's a constant factor related to your additional cumulative grinding time. |
06:54:43 | gmaxwell: | it's like a rainbow table. |
06:54:44 | bramc: | Let's forget about the clustering thing now and think about it just as points |
06:55:09 | bramc: | Because I'm not going to use clustering, because of a much more devastating attack |
06:55:59 | bramc: | At point k in my table I can either just put something which starts with k there, or I can put something which starts with k and has a hash which is also close to k |
06:56:17 | bramc: | I mean, one of its near descendants is close to k |
06:56:22 | gmaxwell: | so okay, what, your sequential proof of work now selects 50 points for which you return the nearest for each and sum their work to get the work of the next sequential? |
06:57:16 | bramc: | not exactly, the sequential proof selects 50 points which then have to be paired to 50 public keys, where the amount of time for the next sequential is based on the worst match of the 50 |
06:57:20 | gmaxwell: | If so, thats easy to break, because you can swap out each of the 50 with the next nearest one at a time and get 50 solutions with approximately the same goodness. |
06:58:05 | gmaxwell: | okay so I take the worst and then replace it with the second best for that worst position, and get two solutions which are almost equally as hard, that I can explore in parallel. |
06:58:07 | bramc: | my contention is that that won't particularly help you attack. |
06:58:52 | gmaxwell: | getting another set of 50 points is like doubling my storage. |
06:58:57 | bramc: | Right but you can only explore them in parallel on your own storage. I think the kind of attack you're describing is one which improves how much you're getting out of your own storage but doesn't particularly benefit from trying to make a fork |
06:59:55 | gmaxwell: | bramc: your whole scheme is supposed to give authority proportional to storage, so if I can play strategically using parallel computation, I get more authority in my alternative universe. |
07:00:09 | bramc: | I believe what you're saying is basically at each point get double your utility by storing a value which when used to make multiple derived values has two of them which are very close to the desired value |
07:00:55 | gmaxwell: | I've left that attack and I'm now talking about your 50 non-clustered points. |
07:01:36 | gmaxwell: | I'm not talking about any 'rainbow table gain' at this point (though I suppose that still remains) |
07:02:44 | bramc: | Sorry I was just dealing with a kid having a meltdown, back again |
07:03:54 | gmaxwell: | you have many instances of the fast sequential process running. when one yields an answer you use your storage to produce many candidate solutions (by computing the 50 answers, finding the worst and then finding the next best around that worst which is only slightly worse than your prior worst)... you take all these and sort them, and update your parallel processor setup running the SPOW to continue exploring the N best paths found so far. |
07:04:07 | bramc: | I'm not sure what your attack is. I'm sure there's some amount of an 'attack' using the stuff I was just talking about, but whether that's a real attack or just reasonable best practices I don't know |
07:05:38 | gmaxwell: | What I'm describing is something that lets someone who can run many copies of the SPOW in parallel gain a potential advantage. Perhaps one copy they'd use publicly to keep participating in the honest network, but they keep the others going privately. Maybe this is a "best practice" but then I think the scheme just reduces to convoluted POW. |
07:05:47 | bramc: | Yes, so for some amount of running in parallel you can get some factor better than what your storage 'should' get based on its size |
07:07:00 | bramc: | The thing I was just talking about is totally unrelated to running the spow |
07:09:08 | bramc: | Let's say we take the most effective case for running parallel spows. There's some public thing which has just happened, giving 50 public keys, I can substitute any of those with my own next nearest, which might not be so far off (although if I only have a small fraction of storage it will be). I can use those small variants to get a whole bunch of spow outputs and see how well they'll work for me. |
07:10:15 | bramc: | The problem is that those spow outputs aren't going to be good for me at all. If I have 1/10 the storage capacity, then my worst of 50 will essentially always be fairly close to 10 times worse than what the public network will generate |
07:11:07 | gmaxwell: | forget 1/10th the storage. lets assume everyone is equal. equal storage. equal computation. equal spow speed. Except you have multiple spow processors and can run the spow in parallel. |
07:11:23 | bramc: | so you have like 1/1000 of all storage |
07:12:54 | bramc: | I understand what you just said, but don't see an attack under those circumstances |
07:15:18 | gmaxwell: | I say you get a substantial benefit relative to other people from doing so. In fact, I think you get benefit linearly proportional to your spow parallelism. Lets make your parallelism unbounded. You have N entries in storage. When the network steps you compute the best solution and make it public, you then compute N-50 SPOWs. When any spow (network or local) completes you compute N-50 more candidate starting solutions, and you terminate and replace any of your N-50 parallel processes that have lower expected returns than your new fodder. |
07:16:26 | bramc: | Having an edge from a historical starting point doesn't really help. Everybody's ahead of you |
07:16:38 | gmaxwell: | in the limit (assuming storage goes to infinity) your speedup from this strategy is infinite after one additional step (depth 1 reorg). |
07:17:04 | gmaxwell: | because some lucky draw will be a BAD step, then a bunch of awesome steps, and you reorg the network. |
07:18:12 | bramc: | Oh, ummm, an important point here: the SPOW is based on the hash of the previous proof of storage, you can't have them sitting around for use later |
07:18:31 | gmaxwell: | I do agree that the storage in practice bounds your parallelism but it means that parties with big cpu farms would have an advantage, and worse, part of that advantage can only be used for attacks, because it'll be late to the party. |
07:19:21 | gmaxwell: | I know, thats why you're exploring many possible futures. e.g. one where the step was X, one where it was the slightly worse solution Y, one where it was the slightly worse solution Z, and so on. |
07:20:33 | gmaxwell: | Maybe unlucky Z's answer turns out to be a really lucky draw that yields a very cheap SPOW for the step after that. Won't know unless you try it. |
07:21:01 | bramc: | Yeah so you find those and they're all worse than what the public network has found, so if you're doing a spow yourself for the follow-on generation, and the network has already gotten ahead of you... |
07:22:26 | gmaxwell: | if they're worse than what the network as found, thats okay. when the network finds something you check each of your brewing SPOWs and ask if network_next_work < work_left_on_this_one+average_work and if so replace it with a next step based on the network. |
07:22:29 | bramc: | Even after the very first iteration of parallel spows every single one of those will finish behind the public network (unless you find one that's a bit ahead and give up your share of public mining rewards to play this little game) |
07:22:56 | gmaxwell: | so yes, you'll often be replacing most of your parallel processes with work based on the network. |
07:24:06 | bramc: | That can only be done if you've hardly forked at all. Are you suggesting an attack where when the network is at generation X, you figure out a slightly worse version of generation X, which makes you personally able to do generation X+1 faster, and then you introduce that to the network? |
07:25:18 | gmaxwell: | Yes, and then you make it to generation X+1 faster than the network, and so it reorgs onto your state... but not just X+1 you have many parallel processors and with exponentially decreasing probability you may beat the network at X+2, or X+3 and so on. |
07:26:11 | gmaxwell: | Your goal: increase your density of blocks in the longest chain, at any cost. Your resources: fixed storage, same as anyone else, and the ability to buy as much parallel SPOW as you like. |
07:26:40 | bramc: | There's some tradeoff here where based on what fraction of all mining capacity you have and how many parallel operations you have and what the N is which we've been discussing as 50 for arguments sake you can get some amount of advantage, but running the numbers myself the effect seems extremely weak |
07:27:09 | gmaxwell: | What is the optimal strategy? I argue that its one that incentivizes producing reorgs (potentially long ones) based on using the parallel processing to consider alternative histories beyond the 'best' one that you and the network are also working on. |
07:29:54 | bramc: | If I have 1/x of the total storage, and I run just 1 extra SPOW, it will result in me taking on average x times as long to do the next spow if I do everything based on my own storage |
07:30:49 | bramc: | So the question becomes, how much more favorable does that get with using lots of extra parallel SPOW? And I think the answer is not very much |
07:32:36 | bramc: | It certainly isn't the case that doubling the amount of parallel SPOW halves the amount of expected work. That would be the case if you were only using one point, hence using more points |
07:34:17 | gmaxwell: | I think with infinite parallel spow the benefit is actually infinite at least when everyone's storage is also infinite. Basically you lose this round, but one of your next steps which takes epsilon longer than the honest network, ends up with a perfect match, and then you do the next set of SPOW in time 0 and end up producing infinite blocks before the honest network has produced two. |
07:34:34 | gmaxwell: | But perhaps thats the kind of crazy result when you look at the limits at infinity. :P |
07:35:24 | bramc: | A rough example calculation is that the chances of getting less than half of your own expected difficulty of spow for the next generation is 1/2 for getting under the limit for each separate point, so that's 1/2**50 for even a speedup of 2, over your own expected difficulty much less the difficulty of everybody else |
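For the record, the number behind that estimate, under the assumed model that each of the 50 points independently comes in under half its expected difficulty with probability 1/2:

```python
# Chance that all 50 points beat half their expected difficulty at once,
# i.e. the chance of even a 2x speedup under this simplified model.
p_2x_speedup = 0.5 ** 50
print(p_2x_speedup)   # ~8.9e-16
```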
07:35:59 | bramc: | Yes infinite parallel spow yields infinite advantage, but you can't have infinite parallel spow, that requires resources |
07:36:36 | gmaxwell: | Well the point of the infinite is to say that this process (at least when storage is sufficiently large) reduces to plain POW. |
07:36:53 | bramc: | So we can guess that you don't have, say 2 ** 64 spow, and even that is completely ridiculous, you probably don't even have 2**32 parallel spow |
07:39:00 | bramc: | It doesn't reduce to plain pow because the costs are such that you're burning more on spow than the value you're gaining from it. If a linear relationship held up that would be true, but it's set up so that running parallel spow is mostly just wasting resources unless you already have a majority of storage capacity |
07:39:45 | bramc: | This part I can work out some more math on and justify straightforwardly. I need to do some serious thinking about cpu tradeoffs for getting more out of storage though. |
07:51:09 | bramc: | Well, I have some tweaks to make and things to work through, thanks for the feedback |
07:51:26 | gmaxwell: | Interesting discussion. I do think this class of algorithms gives me a useful solution to a different problem that I've been working on for some time. And that is creating an anonymous cumulative work hashcash. E.g. a hashcash which you can use up to N times concurrently, and infinite times sequentially; without anyone being able to link the work together (so long as you don't break the concurrency bound). I have a couple constructions that achieve this to different levels, though the best are kinda ugly and require snarks. |
07:51:50 | gmaxwell: | from this general class of ideas I can extract some better ones, though they do have unfriendly TMTO tradeoffs. |
07:52:20 | bramc: | I'm not sure what you're describing, but it sounds interesting, and I'm happy if my thoughts help |
07:53:09 | gmaxwell: | No problem on the feedback, good luck. |
07:59:06 | go1111111: | FYI, the last logs from this channel are from over 36 hours ago, at https://botbot.me/freenode/bitcoin-wizards/ (was thinking it'd be nice if the above discussion was logged) |
07:59:55 | op_mul: | go1111111: https://download.wpsoftware.net/bitcoin/wizards/2015-01-02.html |
08:00:08 | op_mul: | someone might want to give botbot a prod though. |
15:36:17 | hearn: | adam3us: all forks require flag days, including soft forks. |
15:38:36 | Luke-Jr: | hearn: softforks can autodetect their own "flag day" though |
15:38:56 | hearn: | how do you mean? |
15:40:16 | Luke-Jr: | hearn: softforks only require miner action, so can measure adoption by block % |
15:40:30 | Luke-Jr: | and switch when it gets to 95% or whatever |
15:41:22 | Luke-Jr: | hardforks require action by all users, mining or not, so block % is not really useful to determine adoption |
15:41:48 | Luke-Jr: | so a human has to decide a date that hopefully everyone will have updated by |
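A minimal sketch of the kind of miner-signalling trigger Luke-Jr describes; the window size, version number, and 95% threshold are illustrative rather than taken from any deployed rule:

```python
def softfork_active(recent_block_versions, new_version=3,
                    window=1000, threshold=0.95):
    """Enforce the new rules once enough of the last `window` blocks signal."""
    recent = recent_block_versions[-window:]
    signalling = sum(1 for v in recent if v >= new_version)
    return signalling >= threshold * len(recent)

# e.g. 960 of the last 1000 blocks carry the new version -> switch on
print(softfork_active([3] * 960 + [2] * 40))   # True
```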
16:12:54 | hearn: | Luke-Jr: they require action by all users too |
16:13:04 | hearn: | Luke-Jr: if you don't upgrade, you're not really running a full node anymore, right? |
16:13:11 | hearn: | and presumably, you were doing that for a reason .... |
16:13:32 | hearn: | i mean they require less work if you assume that lots of people running full nodes are wasting resources and don't really care about checking all the rules |
16:13:55 | hearn: | but i don't think that's a safe assumption at all. this is why i see soft forks and hard forks as equivalent, except soft forks have a nasty silent failure mode where you think your node is operating in one way, but it actually isn't |
16:14:49 | hearn: | it's better avoided, to be serious about security. |
16:22:57 | gmaxwell: | hearn: I don't think thats true. To start; you never can know what possible extra requirements some majority of miners might be silently imposing on the network. They may keep them secret from you. A soft fork can only reduce the validity of things. So they only produce type I errors (you conclude a transaction can make it into the history when really it cannot), but not type II errors (you conclude a transaction cannot make it into the history when it can). If a soft fork is, for example, limited to reducing the set of acceptable scriptSigs based on some explicit flag in a scriptPubKey, then you would not be using that flag unless you knew about the soft fork. |
16:25:13 | gmaxwell: | So while, sure, its not something to be taken lightly I really cannot agree with the equivalence you're drawing there. To the extent that a soft fork increases your risk of overestimating the survival of a transaction, that same overestimate already can exist due to opacity in miner policy or even just network connectivity. (e.g. you might assume this block will stay in the chain, but really the network is partitioned and it will not) |
16:27:09 | gmaxwell: | And you can't say either of those things about a hard fork. If a block violates the rules, it violates the rules, and no amount of additional miner policy or partitioning will make a block that violated the rules acceptable in the future. The only way for that to happen is a hard fork, and the only way for a hard fork to happen is for the users of bitcoin, in overwhelming number, to accept new software which increases the set of things which are valid.
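A minimal sketch of the subset relationship being described, with hypothetical rule functions and an explicit opt-in flag; this is an illustration, not real consensus code.

    # Hypothetical illustration: a soft fork can only shrink the set of valid
    # spends, and here the extra restriction applies only to outputs that set
    # an explicit opt-in flag in their scriptPubKey.

    def basic_script_checks(spend):          # stand-in for the pre-fork rules
        return spend.get("script_ok", True)

    def extra_scriptsig_restriction(spend):  # stand-in for the new, tighter rule
        return spend.get("scriptsig_ok", True)

    def valid_old(spend):
        return basic_script_checks(spend)

    def valid_new(spend):
        if not valid_old(spend):
            return False
        if spend.get("opt_in_flag"):          # the output asked for the new rule
            return extra_scriptsig_restriction(spend)
        return True

    # For every spend, valid_new(spend) implies valid_old(spend), so an old
    # node can only err by thinking something will confirm when the upgraded
    # majority will reject it (type I), never the reverse (type II).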
16:30:03 | hearn: | right, but trying to estimate if a transaction will survive is fundamental to accepting bitcoins. if a soft fork takes place and you are left behind, then this opens up a widely documented and easily abused way to exploit you, whereas it's otherwise very unlikely that you'll see a transaction make it into a block and then have it die permanently later |
16:30:34 | gmaxwell: | In terms of flag days, the hard fork case's safety criterion is not mechanically measurable either... for soft forks the change will not itself break consensus, with high probability, so long as a majority of miners will enforce it... so it can be auto-triggered based on the blockchain.
16:31:30 | hearn: | yes, sure, the triggering mechanism is orthogonal. a hard fork could be triggered by measuring miner adoption too with an N month delay afterwards, like p2sh. the only question really is whether old nodes stop functioning entirely/go into some kind of safe mode, or whether they keep serving API requests and blocks to clients that are no longer considered valid by the majority |
16:31:36 | gmaxwell: | hearn: it's also very unlikely that you'll see a transaction make it into a block and die later. With the more modern (post-BIP16) procedure for a soft fork, it takes an improbably broken or explicitly malicious (and likely byzantine) miner to produce such a block.
16:32:33 | gmaxwell: | hearn: "a hard fork could be triggered by measuring miner adoption" that's not measuring the right criterion in that case. Miners are 0.0001% of the parties that need to upgrade for the hard fork, and arguably the least important of them. (they just stop being miners if they produce a block inconsistent with the hard fork)
16:33:52 | gmaxwell: | hearn: also multiple blocks per day appear and fail to make it into the longest chain. So blocks that don't survive are already common, and inherently so. |
16:34:49 | hearn: | or a miner that just hasn't upgraded? yes, but if the rules of bitcoin form the social contract between its users then hard vs soft doesn't make much difference - measuring miner participation is just a convenient way to decide on the flag moment for a rule change. some kind of flag day is required either way, for people who want to check all the rules |
16:34:57 | gmaxwell: | So if you are betting that a block will not be conflicted, you'll already have your expectation violated constantly, with no soft fork at all. A soft fork would make that bet even worse, but only if there is a broken/byzantine miner.
16:35:08 | gmaxwell: | hearn: not upgrading will not cause the miner to produce an invalid block. |
16:35:48 | gmaxwell: | hearn: that's what I mean by more modern (post-BIP16) soft forks. We won't do a soft fork that results in rejecting a transaction that a non-upgraded miner would have added to a block itself.
16:36:53 | hearn: | right, you mean because of the change to make the OP_NOPS considered non-standard |
16:36:59 | gmaxwell: | so the only way a non-upgraded miner loses a block due to a soft-fork is because they extended a chain from a byzantine miner. |
16:37:41 | gmaxwell: | hearn: that wasn't a change. There has not been a released version of Bitcoin since 2010 that had them as standard. (and not just the OP_NOPs, also version fields on transactions) |
16:38:20 | gmaxwell: | hearn: (it had temporarily been dropped in git, but the NOPs were restored, specifically for this reason)
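For reference, a rough Python sketch of the standardness filter under discussion; the helper function is made up, but the reserved opcode range is the real one (OP_NOP1 through OP_NOP10 occupy 0xb0-0xb9). Because a non-upgraded node neither relays nor mines scripts that use these opcodes, a later soft fork that gives one of them meaning can't make that node build an invalid block on its own.

    # Rough sketch, not Bitcoin Core's actual policy code.
    RESERVED_NOPS = set(range(0xB0, 0xBA))   # OP_NOP1 .. OP_NOP10

    def is_standard_script(opcodes):
        """opcodes: iterable of raw opcode byte values in a script."""
        return not any(op in RESERVED_NOPS for op in opcodes)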
16:40:36 | hearn: | yeah, ok. still, the question remains - what does it mean to run a full node? to me it has always meant checking all the rules, or at least all the rules that are possible to check. it seems like that's one of the guarantees that a full node tries to give its users. that's why i worry about soft forks, it's undermining that guarantee. i agree that soft forks are less likely to cause problems in practice these days, if done carefully. |
16:40:47 | hearn: | i guess my concern is partly philosophical |
16:41:00 | gmaxwell: | Don't get me wrong, I am not discounting the existence of byzantine miners (there are clearly unethical people with control of large amounts of hashrate at times); it's just that the 1-confirm case is already iffy due to orphaning that already happens because of the non-synchronicity of the network... That you need a byzantine miner to start the surprise invalid fork keeps the risk at the same kind of magnitude.
16:42:30 | gmaxwell: | hearn: Well, as I said before: if you have the _strongest_ version of that requirement then you are doomed from the start, because whatever rules the miners will enforce is inherently opaque to you. (worse, it's not even the miners in the past that matter: it's miners in the future that matter). So I think that the goal of exactly all the rules, all the time, is perhaps a bit too aggressive.
16:43:29 | hearn: | yeah. in my ideal world miners would all somehow remotely attest what they're doing so the system is entirely predictable :) but that's not practical |
16:44:11 | gmaxwell: | From that philosophical perspective we would have been best off if bitcoin had been released complete and perfect and never changed at all, and if you altered even one rule then it wasn't bitcoin, it was something else. ... this is elegant with respect to people never having a change imposed on them against their will... sadly, that's not realistically within the realm of human engineering capability.
16:44:57 | gmaxwell: | hearn: well sadly miner behavior can be hidden by prefiltering the communications to miners, so no amount of cryptographic (or TPM) remote attest could actually give that result. |
16:45:27 | gmaxwell: | (you'd have to have a consensus of their inputs, and thus we've just made the problem recursive.. we need a blockchain to decide what inputs the miners got. Yo dawg.) |
16:45:39 | hearn: | right |
16:46:21 | hearn: | though i guess if the p2p network had authentication+encryption, a miner could attest what it was connected to and if you knew those nodes were also running the same rules as you, you'd know there was no prefiltering. but now we're getting into star-trek bitcoin territory |
16:48:42 | gmaxwell: | well this is #bitcoin-wizards, I have no fear of entertaining what star-trek bitcoin could do. I'm not sure if even that's enough, since if e.g. someone sends an input the miner wants to filter, you just kill the connection. I suppose if the miner were implemented using indistinguishability obfuscation, so that even its operator couldn't learn what inputs it was given, that would be enough. (but requiring Ind-obf is double insane, not just star-trek. And andytoshi seems to have a proof that Ind-obf requires trusted setup fundamentally, IIRC)
16:49:57 | hearn: | TC allows you to create sealed worlds so you could set up an encrypted connection that the owner of the mining node couldn't actually read. but obtaining a machine that can do that is a PITA, i've been looking at it lately and none of the big server manufacturers want to document precisely what their boxes have and can do :( |
16:50:17 | gmaxwell: | maybe if a miner attested to what it was connected to, and got each of those peers to blindly sign the block after creating it, thus proving they were still connected and dropped no inputs... but they could be dos attacked by peers that go up and down. I think. |
16:50:50 | gmaxwell: | yea TPM, if you accept the TPM security tradeoff, is equal to Ind-Obf but it's practical.
16:52:08 | gmaxwell: | so okay, sure: TPM miners whose state is secret from their operators, which have encrypted connections to other such miners.. and so on. Security reduces to how strong the TPM strongbox is and how much you can trust its creator not to make glass-walled versions of it for select parties.
16:52:17 | hearn: | the biggest problem with getting a TPM setup up and running is quite a few TPM manufacturers don't publish their certificates, or claim they do but actually don't. |
16:52:45 | hearn: | and even then it's hard to find out who manufactures a TPM for any given server model without (presumably) phoning up the sales staff and interrogating them
16:53:08 | hearn: | i might try anyway at some point this year as it opens up quite a few useful things, but maybe not. |
16:53:36 | gmaxwell: | yes, well are any except IBM actually doing both: (1) advertising remote attest as a supported thing, and (2) actually protecting the memory so that the protection is worthwhile against logic probes and cold boot? Last I'd looked (which was a long while ago) intel failed (2). |
16:54:41 | hearn: | cold boot is supposed to be fixed. RAM is wiped at reboot in TXT mode. RAM encryption is possible but AFAIK the only implementation is now owned by Facebook, as they bought the company that made it |
16:54:59 | hearn: | i'm hoping at some point they'll open source it. emailed the guy who did the company but didn't get a response yet. |
16:55:16 | hearn: | intel are working on something called SGX that is supposed to solve all the problems TC has. the design looks good but it's taking years for them to ship anything |
16:57:31 | hearn: | that said attaching logic probes to a modern memory bus is quite non-trivial. the xbox hackers who used that technique had to do weird things to slow the bus and cpu down massively, not sure it applies to intel/amd chips |
16:59:09 | gmaxwell: | wrt wipe, you can actually hot-pull the RAM. I've seen this demonstrated. There is some challenge because apparently many current boards implement 'scrambling' (in this case, xoring with a constant pad), so moving the RAM to another board model isn't always reliable. Keep in mind that in the threat model with the miners, the attacker has physical access.
16:59:36 | hearn: | yeah, physical access is the hardest threat model |
16:59:49 | hearn: | all you can do is keep raising the bar, really. |
17:00:02 | hearn: | SGX is entirely on-die though. it assumes everything outside the CPU core is compromised. |
17:00:05 | hearn: | quite impressive, really |
17:00:12 | hearn: | or will be, if/when it ships |
17:00:33 | hearn: | they've also done a lot of work on allowing upgrade of the software without losing access to its sealed secrets, and on making remote attestation actually work |
17:03:18 | gmaxwell: | hearn: xbox hackers were somewhat limited by costs. one can acquire a multi-GHz DSO, they're just expensive (I'd looked into renting one for testing libsecp256k1 against sidechannel attacks) :) but indeed. Well, that's the attraction of Ind-Obf, because it gives you the same behavior through cryptography, but it's far from realistic yet.
17:04:04 | gmaxwell: | IBM cryptocards are a single sealed package... so I think they're the only thing existing (At least the only thing I know about) that really gives a particularly strong story for remote attest against an attacker with unfettered physical access. |
17:04:18 | hearn: | on a slightly different tack, did you see the geppetto paper? very impressive leap forward beyond what Ben-Sasson et al have been doing. and they say they'll actually open source it (though i'm getting more skeptical about hearing this from academic research groups) |
17:04:48 | hearn: | yes the IBM cards are cool, but seem to be about as easy to find as unicorn poop :) |
17:05:09 | gmaxwell: | hearn: so actually I have sha512 and sha256 proofs running and can now probably do a ZKCP for a large sudoku with a few more days' work.
17:05:27 | hearn: | with what? geppetto? |
17:06:03 | gmaxwell: | hearn: If you can find the devkit, I have three of the IBM 4764s (PCI-X version, with PPC CPUs in them). Can't get IBM to respond to me. :-/
17:06:22 | gmaxwell: | hearn: no with some layer on top of libsnark that someone recently published! |
17:06:34 | hearn: | oh, awesome! that's very cool |
17:06:46 | hearn: | ah is that the code published by the microsoft guys? |
17:06:53 | gmaxwell: | hearn: https://github.com/jancarlsson/snarkfront |
17:07:04 | hearn: | ah ha, i hadn't seen that, thanks |
17:07:28 | gmaxwell: | needs a very recent GCC due to C++11 bleeding edge stuff in it. |
17:07:42 | gmaxwell: | e.g. 4.7 was not adequate; 4.8 worked for me.
17:10:11 | gmaxwell: | hearn: I've been slowly trolling hardware surplus (hurray for living in Silicon Valley) and have been able to pick up 3 seemingly working IBM 4764s for basically nothing... just cannot obtain the software now. Though I've considered this pretty low priority, too many other things in the air. Now that Blockstream exists I can probably contact their sales, as a company, and maybe make some progress.
17:10:17 | gmaxwell: | But bleh, dealing with sales. |
17:10:52 | hearn: | thanks for the offer, i'll bear that in mind if doing stuff with TC gets to the top of my todo list :) |
17:10:59 | hearn: | i wonder if Hal used to have the dev kit? |
17:11:04 | hearn: | from the RPOW days |
17:12:10 | hearn: | seems an IBM card or SGX setup would be ideal for initialising the zkp key pairs |
17:12:23 | gmaxwell: | I tried contacting Hal about that ... alas, he didn't get back to me. But it was somewhat after the last communication I'd seen from him anywhere, so he may not have been able. Hal had the prior version that was x86-based, so his software would likely not have been useful; he might have had a useful contact though.
17:12:58 | gmaxwell: | hearn: yea, that's one of several things I'd like to use them for.
17:13:35 | hearn: | perhaps his wife has an index of his stuff. |
17:13:48 | hearn: | huh wow jan carlsson rewrote libsnark from scratch! |
17:14:28 | hearn: | geppetto, if it's open sourced, would probably be easier than this though. they have a full blown backend to clang that compiles C down to "MultiQAPs" |
17:14:57 | hearn: | there's a very compelling bit of example code at the end that shows how to use it efficiently. the api seems quite straightforward. they got something crazy, like 8 orders of magnitude improvement in proving time |
17:20:42 | gmaxwell: | hearn: well it's not quite from scratch. But it reimplements all the circuit template stuff.. uses the crypto backends in libsnark. It's impressive in any case. |
17:27:05 | hearn: | gmaxwell: any idea why he claims the verification takes 8 seconds for sha2? i thought it was supposed to be measured in milliseconds to verify a pcp proof |
17:57:18 | gmaxwell: | hearn: he means the 8 seconds is to produce the proof of a sha2 computation, not to verify it. :)
17:57:36 | hearn: | ah i see |
18:47:12 | gmaxwell: | [OT] The SP20 miners from spondoolies have a much 'nicer' noise than the SP10. Still not quiet with the fans cranked up but much lower pitch. |
23:18:28 | andytoshi: | gmaxwell: i have an argument that snarks require trusted setup (but my personal feeling about it is that we are just defining "zero-knowledge" in an overly platonic way) (have not had the spare brain cycles in months to consider this) |
23:19:04 | andytoshi: | ind-obf i believe requires multilinear maps, which are currently only possible (and "approximately so") by graded encodings, which iirc involve a trusted setup, but i feel this is accidental |
23:22:44 | andytoshi: | "requires multilinear maps": what i mean by this is that Matiyasevich's theorem (computable sets == diophantine sets) suggests an intuition that "obfuscating general circuits is as hard as secure addition + multiplication", and the only framework we have for "secure ring operations" is multilinear maps. but the optimist in me thinks the next 5-10 years will see some alternative to multilinear maps, something quantum-hard which doesn't have any trusted parties
23:25:20 | gmaxwell: | yea, I've seen some of the multilinear map systems and they're really pretty ugly.
23:26:31 | gmaxwell: | "oops sorry, your execution became too noisy and your ciphertext is not decodable, try again maybe?"
23:26:34 | andytoshi: | i'm also kinda doubtful about the security of these graded encoding schemes, the early versions are broken now by what appear to be really ad-hoc attacks |
23:27:07 | andytoshi: | (but what do i know, i am way behind on my graded encoding reading) |
23:29:07 | gmaxwell: | if only these systems were as pretty as bilinear maps, which are basically a realization of exactly what you would have wanted from an idealized construct. |
23:29:53 | gmaxwell: | except, I suppose, in that they aren't quantum hard. |
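For context, a short LaTeX note on the textbook property being referred to; these are the standard definitions, not anything specific to the schemes discussed above.

    % A bilinear pairing on prime-order groups G_1, G_2, G_T is a map
    \[ e : G_1 \times G_2 \to G_T, \qquad e(g^a, h^b) = e(g, h)^{ab}, \]
    % that is non-degenerate and efficiently computable.  An n-multilinear map
    % would generalize this to
    \[ e(g_1^{a_1}, \ldots, g_n^{a_n}) = e(g_1, \ldots, g_n)^{a_1 a_2 \cdots a_n}, \]
    % which the graded-encoding candidates only approximate (noisy encodings,
    % trusted setup); hence the complaints above.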
23:29:59 | andytoshi: | yeah. it's a little surprising to me that there is no such thing, given how exhaustive our understanding of groups is these days.. |
23:30:16 | andytoshi: | mm, yeah, that's too bad. i really like DL for its simplicity :/ |
23:30:50 | gmaxwell: | well nothing like it will be quantum hard. |
23:32:34 | gmaxwell: | (I believe Shor can be made to work for any finite cyclic group)
23:35:17 | andytoshi: | that's also my recollection, tho to my shame i have not actually read his paper |
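A sketch of why that recollection is right, via the standard period-finding reduction; nothing here is specific to any particular group.

    % Let G = <g> be cyclic of order N, and let h = g^x be the discrete-log target.
    % Define
    \[ f : \mathbb{Z}_N \times \mathbb{Z}_N \to G, \qquad f(a, b) = g^a h^{-b} = g^{a - xb}. \]
    % Then f(a,b) = f(a',b') exactly when a - a' = x (b - b') mod N, so the level
    % sets of f are the cosets of the subgroup of Z_N^2 generated by (x, 1).
    % Quantum period finding over Z_N x Z_N recovers that generator, and hence x,
    % using nothing about G beyond it being finite cyclic with an efficiently
    % computable group operation.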
23:58:06 | gmaxwell: | This is pretty cool: http://www.newae.com/ sort of a prefab tool kit for sidechannel analysis and glitching attacks. (mostly targeted at small microcontrollers/smartcards) |