00:00:42 | Eliel: | bramm: root? that's of course defined in when the coins are moved to the sidechain. |
00:00:47 | kanzure: | bramm: have you read https://download.wpsoftware.net/bitcoin/alts.pdf or https://download.wpsoftware.net/bitcoin/pos.pdf or https://download.wpsoftware.net/bitcoin/asic-faq.pdf ? |
00:01:20 | rusty: | bramm: the suggestion was there would be some calculation limiting the amount of bitcoins which could be transferred back from any particular sidechain to the number which were moved there. This contains any damage. |
00:01:30 | bramm: | From the side chains whitepaper: "Such a proof may be invalidated by another proof demonstrating the existence |
00:01:30 | bramm: | of a chain with more work which does not include the block which created the output." |
00:01:59 | gmaxwell: | bramm: sure, but this has nothing to do with block validity. It's still internal to the transaction entirely. |
00:02:08 | gmaxwell: | bramm: Work on the sidechain. To spend a sidechain coin your signature shows a spend in the sidechain, and then some (per-sidechain parameter) amount of additional work on top of it. |
00:02:57 | bramm: | kanzure, the ASIC faq says some things which are just plain wrong |
00:03:00 | gmaxwell: | bramm: The signature can only sign for transactions which produce txouts which are CHECKLOCKTIMEVERIFY (or analogous). |
00:03:20 | op_null: | bramm: what do you think is wrong in it? |
00:03:23 | kanzure: | so... yes? |
00:03:46 | bramm: | Q: Is ASIC resistance desirable? A: No |
00:03:54 | gmaxwell: | ... |
00:04:10 | kanzure: | are we sure this is bram |
00:04:26 | op_null: | bramm: can you provide a counter for that? |
00:04:57 | kanzure: | quick, someone come up with an obscure dht question |
00:05:34 | fenn: | kanzure: unlikely a troll would go to such lengths to get -wizards to review his proof of work scheme posted on bramcohen.com |
00:05:45 | kanzure: | excellent point |
00:05:49 | kanzure: | carry on |
00:06:43 | gmaxwell: | bramm: asic faq spends quite a few pages explaining its argument as to why 'asic resistance' is not desirable. If _that's_ your example of something 'just plain wrong', then I really recommend you give the page a serious read. |
00:07:18 | midnightmagic: | That's not necessarily correct, as the discussion, the twitter back-and-forths, and related info have been noticed and commented on by some well-known troll types who are capable of manufacturing reasonably consistent technical argumentation. |
00:07:33 | bramm: | gmaxwell, I'm still not following what these proofs are, it seems that the main thing you need to release a coin from a side chain is a proof of a bunch of work, and then an ... allegation? of the release happening |
00:07:34 | midnightmagic: | no offence, bram. |
00:08:16 | bramm: | gmaxwell, Reading through the argument, it seems to be saying that ASIC manufacturers spend more money on power than on hardware? |
00:08:26 | bramm: | ASIC miners I mean |
00:08:28 | gmaxwell: | bramm: they do. |
00:08:54 | gmaxwell: | (was also true of gpus, for that matter) |
00:08:57 | bramm: | Could have just said that up front rather than forcing me to read through pages of argumentation :-P |
00:09:50 | bramm: | Anyway, that's an interesting point. The question then becomes what is already so well optimized on regular CPUs that ASICs can't do it with less power |
00:10:21 | gmaxwell: | Useful feedback, I suppose. The page makes a number of other important points about criteria we've recognized as being important for good POW schemes, e.g. progress freeness, approximation freeness, optimization freeness. |
00:10:22 | op_null: | the question is more if that is desirable. |
00:10:23 | bramm: | Or alternatively can you come up with a scheme which uses such mass quantities of memory that the costs are primarily memory depreciation rather than power? |
00:11:04 | bramm: | Both of which are very different takeaways than 'ASIC resistance is futile' |
00:11:08 | kanzure: | why is it important to you to have asic resistance? |
00:11:23 | bramm: | to avoid centralized control |
00:11:45 | bramm: | And to avoid general waste of resources. It's not like all this manufacturing and CPU burning is contributing to society in any meaningful way. |
00:11:47 | kanzure: | non-asic centralization is impossible? |
00:11:54 | kanzure: | wait, what's waste here? |
00:11:55 | gmaxwell: | bramm: in the context of cryptocurrency, the answer may be not much. Since we're in (in theory) perfect competition, even only a small factor efficiency difference puts everyone else out of business. It's hard to imagine some instruction sequence that couldn't at least be made 10x more efficient on specialized hardware ... you can likely get that much just from eliminating peripheral and IO costs. |
00:12:22 | op_null: | without ASICs we would still be owned by botnets. the Skynet botnet thrashed the Bitcoin network from what I've read of its size. |
00:12:35 | gmaxwell: | Though, e.g. for a password hardening KDF, getting an attacker down to operating only 10x more efficiently than the defender would be pretty great. |
00:12:37 | kanzure: | there are many who argue that proof of work is not wasteful because of the purpose it is being put to |
00:12:51 | bramm: | Saying 'other things could kill us, we should throw in the towel' isn't a particularly useful argument |
00:13:06 | bramm: | kanzure, I'm not even going to address that argument |
00:13:15 | kanzure: | hmm. |
00:13:19 | op_null: | I'm saying we shouldn't throw away the best security we have. |
00:14:22 | kanzure: | bramm: so the concern is that the costs of a decentralized system are greater than the costs of running a centralized ledger..? |
00:14:27 | bramm: | op_null, I don't think we have to worry about doing such a great job of asic resistance that the asics have no advantage at all, the bigger problem right now is the asics running everything |
00:14:31 | op_null: | you can have your dreams of a grand hyper decentralised network, but if we can be owned by a botnet the dream is ruined out of the gate. |
00:14:59 | bramm: | kanzure, I didn't suggest fixing things by switching to a centralized ledger |
00:15:01 | phantomcircuit: | botnet operators loved the X11 algo |
00:15:18 | op_null: | phantomcircuit: and the Monero one. |
00:15:23 | kanzure: | bramm: if your concern was waste and it's not worth addressing waste, then i was left assuming you would choose waste-minimizing solutions.. |
00:15:39 | instagibbs: | more to the point, look at what gmaxwell said: "Since we're in (in theory) perfect competition, even only a small factor efficiency difference puts everyone else out of business." it's a temporary win |
00:15:54 | instagibbs: | and the more complex the ASIC is in the end, the higher the barrier to manufacture |
00:16:06 | phantomcircuit: | gmaxwell, we're at the point where it makes sense to run hw, but not to buy more |
00:16:33 | bramm: | kanzure, there is a very real question as to what value bit coin is providing at all, but for the sake of research it's best to assume that its hyper-decentralization is the thing of value and that concept needs to be worked within |
00:17:06 | instagibbs: | censorship resistant ledger, imo. |
00:17:09 | bramm: | I mean, if you take away hyper-decentralization then there isn't anything interesting in bit coin protocol at all. |
00:17:29 | Alanius: | if hyper decentralization is the important thing, why worry about waste? |
00:17:51 | kanzure: | a definition of what is and is not waste would be useful here |
00:18:13 | bramm: | Alanius, because you can keep the hyper decentralization while reducing the amount of waste |
00:18:21 | instagibbs: | bramm: how |
00:18:23 | bramm: | burnt CPU = waste |
00:18:30 | gmaxwell: | I'm confused about this 'waste' stuff. |
00:18:49 | gmaxwell: | Because it's taking as given that there is non-trivial waste to begin with; and that is not at all obvious to me. |
00:19:04 | bramm: | This is my new painting. It is worth $100 million. I call it DEADBEEFDEADBEEFDEADBEEF |
00:19:18 | kanzure: | is that your fingerprint? |
00:19:31 | fenn: | the waste is electricity turned to heat |
00:19:45 | bramm: | Hey zooko |
00:19:50 | kanzure: | (i am reminded of 0xFEEDBEEF) http://pgp.surfnet.nl:11371/pks/lookup?op=vindex&fingerprint=on&search=0xFEEDBEEF |
00:19:56 | gmaxwell: | that's a pretty flawed understanding of economics there. When I make the same argument I talk about a smashed hope diamond which has been urinated on by the pope; ... very rare item. So obviously valuable. :P |
00:20:07 | Alanius: | Isn't waste subjective, i.e., what's waste for one person might not be waste for another |
00:20:12 | zooko: | Hi there, bramm! |
00:20:24 | phantomcircuit: | gmaxwell, iono i could see certain people paying big money for that |
00:20:29 | zooko: | Heh heh. |
00:20:38 | bramm: | instagibbs, One possible improvement would be to make mining costs be based on memory depreciation rather than CPU usage |
00:20:41 | gmaxwell: | But none of that is an understanding of Bitcoin. Bitcoins aren't produced from work, they're just handed out by the system. The work in bitcoin exists to provide security, and no such spurious economic argument applies there. |
00:21:29 | bramm: | There are some other crazy techniques for reducing waste but I'm keeping mum about them for now. |
00:21:52 | op_null: | I hope they're not "useful" PoW sort of techniques. |
00:22:05 | gmaxwell: | bramm: if I may be so bold, based on your ideas posted so far I think they're likely to be long since considered, found wanting, and discarded. :) |
00:22:07 | bramm: | op_null, oh hell no, much saner and more interesting than that |
00:22:47 | gmaxwell: | and if so, keeping them secret just prolongs your ignorance, not anyone else's. :) |
00:23:15 | bramm: | gmaxwell, No, my most interesting idea is based on a primitive where I had to improve considerably on the state of the art, for a problem where there's actual published academic work about it |
00:24:04 | kanzure: | so about that definition of waste... |
00:24:40 | Alanius: | < bramm> burnt CPU = waste |
00:25:07 | Alanius: | so if I understand correctly, bramm is proposing to burn RAM instead of CPUs |
00:25:08 | bramm: | Also all of your arguments against anything I'm saying apply equally against cuckoo, maybe you should take it up with tromp |
00:25:26 | gmaxwell: | bramm: That kind of statement is also true about, e.g., the proposals that replace proof of work with timelock encryption ticking; or about the proposals related to efficient asymmetrically verifiable disk storage... or proposals related to efficient relaying using network coding. |
00:25:37 | op_null: | bramm: we have, in length. |
00:26:15 | gmaxwell: | Tromp understands them. There just doesn't exist a clear paper arguing whether memory hardness is useful or not. If it is, tromp's work is interesting. |
00:26:52 | bramm: | Is there any written proposal about time lock encryption ticking? |
00:27:03 | gmaxwell: | (or well, more likely to be interesting than most other proposals. E.g. it doesn't have the trivial TMTO that most attempts have; I'm not sure if you missed it, but I believe I pointed out that your own proposal has a trivial time-memory tradeoff) |
00:27:29 | bramm: | what time memory tradeoff? To which of my two proposals? |
00:28:49 | gmaxwell: | bramm: on timelock... A couple; one handy to me right now is that it's mentioned in passing on https://en.bitcoin.it/wiki/User:Gmaxwell/alt_ideas (though a construction based on ideal lattices seems like it works better) |
00:31:32 | gmaxwell: | bramm: in your first proposal say instead of computing a great many values to find a collision I just compute two, using ~no storage. and grind the outer function. This is a time memory tradeoff (in the other direction-- I abandon memory hardness but do more computation). There are in between values, for example, only remember values that begin with 0xdeadbeef. (distinguished points, to use the pollard-rho nomenclature) to get any ... |
00:31:38 | gmaxwell: | ... degree of incremental memory usage that I want. Oh also, you can avoid the n*log(n) sort in that model too simply by using a hashtable and evicting (though ... in hardware I dunno if the muxing for random access is really cheaper than a structured sort) |
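A minimal sketch of the time/memory dial gmaxwell describes, assuming a toy truncated hash: only "distinguished" outputs (those whose top bits are zero) get stored, so memory shrinks while the expected amount of hashing grows. Names and parameters here are illustrative, not anyone's actual proposal.

```python
import hashlib
from itertools import count

HASH_BITS = 32  # toy output width so collisions are findable quickly


def h(x: int) -> int:
    """Toy hash truncated to HASH_BITS bits (illustrative only)."""
    return int.from_bytes(hashlib.sha256(x.to_bytes(8, "big")).digest()[:4], "big")


def find_collision(distinguish_bits: int):
    """Find x != y with h(x) == h(y), remembering only 'distinguished' outputs
    (top `distinguish_bits` bits all zero).

    distinguish_bits = 0  -> full memoization (the memory-heavy birthday search)
    larger values         -> proportionally less memory, proportionally more hashing
    """
    seen = {}
    for x in count():
        v = h(x)
        if distinguish_bits and (v >> (HASH_BITS - distinguish_bits)) != 0:
            continue            # not distinguished: don't store, just keep grinding
        if v in seen and seen[v] != x:
            return seen[v], x   # collision between two distinguished outputs
        seen[v] = x
```

find_collision(0) is the fully memoized search; find_collision(8) keeps roughly 1/256 of the values and does correspondingly more hashing, which is the tradeoff bramm pushes back on just below.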
00:31:51 | Myagui: | Myagui is now known as bilIotronic |
00:32:02 | bilIotronic: | bilIotronic is now known as davidIatapie |
00:32:12 | davidIatapie: | davidIatapie is now known as Myagui |
00:32:37 | bramm: | gmaxwell, I mentioned that tradeoff in my post, it results in n*n instead of n*log(n) computation, not exactly appealing |
00:32:54 | Myagui: | Myagui is now known as MatyIda |
00:33:21 | bramm: | Also using a hash table like that is basically doing a sort, it's hiding the log(n) under the rug by assuming that it's a constant operation |
00:33:33 | MatyIda: | MatyIda is now known as Myagui |
00:33:37 | bramm: | And it's a disaster in practice because of all the random memory accesses |
00:34:01 | gmaxwell: | bramm: a distinction there is that you only need to find one. so you may terminate early. But sure. |
00:35:35 | bramm: | When I post about these things publicly I generally get responses along the lines of 'Have you heard of scrypt? It's awesome!' hence my dumb questions |
00:35:39 | gmaxwell: | Yes, the asymptotic difference between the fully memoized version and the memoryless one is a sqrt(). But you suggest a 'modest constant', so the concrete difference in efficiency is something that can be evaluated. |
00:36:24 | bramm: | Yeah going for a factor of 2 or 4 with memory/cpu tradeoff might work |
00:40:17 | bramm: | gmaxwell, I'm not following the usage of the time lock you propose on your alt ideas. What utility is there in people making encryptions which can be decrypted at some fixed point in the future? Seems rather unrelated to proofs of work |
00:40:38 | gmaxwell: | The other thing that memory hard functions run into, which tromp's also suffers from, is a lack of progress freeness. Though he thinks he's picked parameters where they're okay. You observe something kind of close to this problem in your post with giving an upper limit. |
00:42:52 | gmaxwell: | (the main motivation when we talk about progress freeness is that an algorithm which has progress gives an advantage to faster participants over independently operating ones, e.g. a centralization advantage) |
00:44:03 | bramm: | Yes that is an issue. It has more to do with the usage of the function than the function itself though. |
00:44:13 | gmaxwell: | right. |
00:45:02 | fenn: | bramm: proof of work is measured in CPU/ASIC cycles which reduces to energy, but in timelock-POW the scarce resource is time (theoretically; i haven't read the timelock papers) |
00:45:18 | bramm: | fenn, I'm way ahead of you on this one :-) |
00:46:01 | zooko: | bramm: why is your idea better than tromp's cuckoopow? Feel free to answer by link to previous writing. |
00:46:15 | zooko: | bramm: and also, are you going to be patenting your thing? |
00:46:30 | gmaxwell: | bramm: It has many utilities in the cryptocurrency space, e.g. avoiding miners censoring transactions by getting encrypted versions mined... the motivation there is dual-using the work without compromising (maybe) other useful pow properties, and doing so with something that can't be delivered otherwise... But more specifically related to consensus itself, with functioning timelock encryption you can do things like fairly elect ... |
00:46:37 | gmaxwell: | ... participants. but the motivation there was the dual use. |
00:47:04 | bramm: | zooko: I haven't read through cuckoopow, that may well be better (I'm expecting it is, only heard about it a few hours ago) |
00:47:18 | bramm: | zooko, I'm not doing any patenting on this stuff |
00:47:30 | gmaxwell: | fenn: well time is not a good material for consensus; relativity precludes the universality of time, as lamport first observed. :) |
00:47:49 | fenn: | yeah i can't see how timelock could possibly work |
00:48:17 | op_null: | have you looked at peter todds timelock? |
00:48:23 | gmaxwell: | fenn: it's a computational ticking (you've seen the old rivest timelock puzzles paper?) |
00:48:44 | fenn: | both already on my reading list, which is far too long |
00:49:35 | gmaxwell: | fenn: in any case, since time is not really something universal, you can still impose universality onto it if you have a consensus.. e.g. instead of asking what time can do for consensus, ask what consensus can do for time; and you get this old writeup: http://people.xiph.org/~greg/decentralized-time.txt |
00:51:07 | zooko: | bramm: why are you not-patenting any of this stuff? Is this a project you intend to profit from? |
00:51:20 | zooko: | bramm: disclosure: I'm intending to profit from a cryptocurrency project right now. |
00:51:29 | zooko: | I mean I'm right now intending. Not I'm intending to profit right now. ☺ |
00:52:30 | bramm: | Somehow I don't think trying to extract value from a system designed to not be taken down by attacking it with patents would appear to be unlikely to work |
00:53:07 | bramm: | gmaxwell, Having transactions be introduced encrypted is an interesting idea, but I think it can be done without time lock |
00:53:08 | zooko: | *nod* |
00:53:27 | bramm: | Sorry about the number of negatives in that last comment of mine |
00:53:56 | zooko: | bramm: adam3us has proposed encrypted transactions to prevent miners from discriminating among them. |
00:54:36 | gmaxwell: | bramm: you'd think. It's proven to be particularly resistant. Adam has written a fair amount (search term: committed transactions). A problem arises where someone can't censor the _transaction_ but they can censor the key reveal, which is pretty much as good as censoring the transaction. Functional timelock encryption would solve that. |
00:55:26 | zooko: | Another problem with it is that it doesn't prevent the recipient from discriminating among payment-histories. |
00:56:19 | bramm: | Here's a paper which may be of interest: https://eprint.iacr.org/2011/553.pdf |
00:56:44 | tdlfbx: | I've been thinking of using the blockchain as a clock source... |
00:57:05 | kanzure: | bramm: here are some things you may be interested in http://diyhpl.us/~bryan/papers2/bitcoin/ and less relevant but whatever http://diyhpl.us/~bryan/papers2/security/ |
00:57:19 | tdlfbx: | Just compare clocks upon any inter-node communication. Don't accept anything if the clock is too far off. This would incentivize everyone to agree on clocks. |
00:57:40 | tdlfbx: | Randomize who revealize their clock first... |
00:57:59 | tdlfbx: | e.g. via D-H secret % 21. |
00:58:05 | tdlfbx: | err. D-H secret % 2. |
00:58:56 | tdlfbx: | *reveals. I'm going to go take a tiping class new. |
00:59:10 | bramm: | bit coin already has significant interaction with clocks |
00:59:24 | op_null: | not really. |
01:00:04 | bramm: | Well, clock interactions are minimized, but they're there, specifically it won't accept transactions too far in the future |
01:00:27 | op_null: | blocks, you mean. transactions don't have timestamps for obvious reasons. |
01:00:27 | tdlfbx: | But it's very, very lenient. |
01:00:46 | bramm: | Yes I meant blocks |
01:01:04 | tdlfbx: | op_null: what are the "obvious reasons" for not putting timestamps on transactions? |
01:01:12 | gmaxwell: | bramm: ah, wrt sequential functions you may find https://bitcointalk.org/index.php?topic=310323.0 interesting (the goal there is different but what it wants at its heart is a sequential function with trapdoor verification; the rivest timelock puzzles come up in that thread too) |
01:01:25 | op_null: | tdlfbx: you can't prove anybody is telling the truth. what would you gain from it? |
01:01:47 | tdlfbx: | You always compare to your own clock, and reject anything that is too far outside it. |
01:01:54 | op_null: | if we had some way of proving transaction timestamps were right without trust, we could just forget the whole block chain entirely. |
01:02:12 | bramm: | gmaxwell, Note that time lock puzzles and proofs of sequential work are rather different animals |
01:03:20 | op_null: | tdlfbx: how would that help? that just means you can't sign transactions for some time in the future. |
01:03:24 | tdlfbx: | Simply rejecting anything with an odd timestamp incentivizes nodes to figure out their clock bullshit, without specifying how they should achieve that. |
01:03:51 | op_null: | it doesn't do that at all. |
01:04:32 | tdlfbx: | Why not? There are lots of clock sources with different attack vectors. Including simply taking the average from communicating nodes. |
01:05:02 | op_null: | why do I care if other people's transactions get rejected by my node? |
01:05:15 | tdlfbx: | If you want your node to agree with the longest chain you care. |
01:05:17 | op_null: | you're making a faulty assumption that all transactions are broadcast exactly when they are signed too. |
01:05:36 | op_null: | what, you want transaction timestamps to be a validity rule too? |
01:05:38 | tdlfbx: | They are, because if I don't receive them in a timely manner, I'd reject them. |
01:05:46 | tdlfbx: | Yes. |
01:05:56 | op_null: | ..no. |
01:06:08 | bramm: | You always accept all transactions which are referenced by the root you accept |
01:06:09 | bramm: | Any other policy would be insane |
01:06:39 | op_null: | assuming that block is valid of course. you don't accept invalid transactions, that would make the block itself invalid. |
01:07:52 | tdlfbx: | bramm: within a window of communications latency. |
01:08:09 | gmaxwell: | bramm: I know. just some related space there. Also in relation to sequential proof of work, are you aware of the compact spv proofs (appendix C of the sidechains whitepaper)? In it we show how to compress a proof of the total work in a blockchain to a proof of log(blocks) size. This is also a kind of sequential proof of work (e.g. because the blocks themselves make progress they're inherently sequential). |
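For context on what is being compressed there: a naive SPV proof of total chain work hands over every header, checks the links, and sums per-block work, which is linear in chain length; the compact proofs get the same statement down to log(blocks) size. A rough sketch of the linear version, with hypothetical header fields:

```python
def naive_total_work(headers):
    """Linear-size proof of cumulative work: verify each header links to its parent
    and meets its own target, then sum the per-block work estimate.
    `hash`, `prev_hash` and `target` are hypothetical attributes of a parsed header."""
    for parent, child in zip(headers, headers[1:]):
        assert child.prev_hash == parent.hash          # chain linkage
    work = 0
    for hd in headers:
        assert int(hd.hash, 16) <= hd.target           # proof-of-work check
        work += (1 << 256) // (hd.target + 1)          # expected hashes to hit this target
    return work
```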
01:08:26 | bramm: | I'm not sure what the policies are around what transactions miners accept. The sanest thing would be to accept everything currently valid |
01:08:39 | op_null: | why is that sane? |
01:09:07 | tdlfbx: | gmaxwell: I really like that part. |
01:09:13 | bramm: | op_null, because you get the commission on all of them :-) |
01:09:17 | tdlfbx: | Anyone plan on implementing it in their coin |
01:09:19 | tdlfbx: | ? |
01:10:13 | gmaxwell: | tdlfbx: it's just an additional commitment, like height in coinbase. Doesn't require a different system to implement, miners could be including it today in bitcoin. Fortunately the block headers are small enough that most applications don't care about the compression. |
01:10:15 | op_null: | bramm: not all valid transactions pay fees. not all valid transactions pay enough fees for their size. you might not want to include any transactions at all. you might not want to include some because you find their content unfriendly. |
01:10:31 | bramm: | op_null, fair enough |
01:11:26 | tdlfbx: | Hence encrypted transactions. I worry a lot about this one too, especially as "bitcoin 2.0" implementations like counterparty, mastercoin come online, there is an incentive to omit transactions that bet against the house. |
01:11:34 | bramm: | gmaxwell, hmm, yeah, might be that that trick applies to sequential work generally, I haven't read that part yet |
01:11:49 | op_null: | tdlfbx: I think that's a positive thing. |
01:12:34 | tdlfbx: | huh? If I'm the house, and I'm also mining, I would exclude transactions that bet in a way so as to decrease my profit. That's not a positive thing. |
01:13:18 | tdlfbx: | Where "house" is doing bid-ask matching, for instance, or operating a derivatives market |
01:13:35 | op_null: | my block my choice. if your system is that fragile you shouldn't have designed it that way. |
01:14:29 | bramm: | gmaxwell, I think you meant appendix B, not appendix C |
01:14:56 | tdlfbx: | Heh. You're speaking as a malfeasant derivatives market operator, and I'm speaking as a customer. Customers don't design such things. |
01:16:14 | op_null: | I didn't say anything about the customer. |
01:16:15 | bramm: | gmaxwell, And if I'm reading it right there's a remarkably similar thing sitting on my hard drive right now :-) |
01:17:06 | tdlfbx: | So here's how to make money by omitting transactions: operate a derivatives market. Also bet on the same market. Exclude transactions which bet in the same direction as you. |
01:17:29 | tdlfbx: | In bitcoin, a miner *can* choose to exclude transactions. |
01:17:48 | tdlfbx: | And that's a critical part in enabling malfeasance. |
01:17:54 | op_null: | why are you designing such a shitty system that relies on transactions being confirmed right away? if you make the assumption that all miners will include all transactions in every block you're going to be in a lot of pain. |
01:18:31 | bramm: | Usually a transaction will just have to wait for a later miner to accept it if there's one miner who's trying to suppress it |
01:19:03 | tdlfbx: | So include bid/ask offers in the blockchain, and wait for them to be confirmed? Might fix my concern...but god it's slow. |
01:19:52 | op_null: | why on earth are you wanting to do orderbook stuff on the block chain!? |
01:20:13 | bramm: | Yeah the block chain just has transactions in it. The utility of smart transactions is still highly questionable |
01:20:52 | bramm: | The only smart transaction which makes any sense to me at all is currency exchanges |
01:21:17 | gmaxwell: | bramm: oops yes. |
01:21:22 | gmaxwell: | Got the order of them wrong. |
01:22:38 | op_null: | bramm: what? |
01:23:06 | op_null: | other than cross chain swaps it's totally unusable for that. |
01:23:31 | bramm: | op_null, crypto currency exchanges :-) |
01:23:53 | op_null: | it's even of limited use for that. |
01:24:23 | tdlfbx: | op_null: so to avoid malfeasance by excluding transactions one has to include bid/ask offers in the blockchain, and have to confirm them. But you object to having them in the blockchain? Can't have it both ways. |
01:24:40 | op_null: | you almost always end up in a state where one party or the other can stall, and use the delay to hedge against the counterparty. |
01:25:14 | bramm: | gmaxwell, the question of an attacker using up everyone's peer connections is an interesting one. The best solution isn't to have a single way to accept connections, it's to reserve some number of connection slots to peers who do the best proof of work, some number who have the best IP match, some number who you've known the longest, etc. |
01:25:21 | op_null: | tdlfbx: I really have no idea what you're talking about. |
01:25:53 | bramm: | op_null, Yeah smart transactions suck |
01:26:04 | gmaxwell: | bramm: yes. Oh I'm glad to hear you validate that. That's also my preferred solution. Here is a snazzy criterion which I'm not sure you've considered: lowest minimum RTT. It's trivially measured and impossible to forge. |
01:26:11 | gmaxwell: | (though it results in bad topology) |
01:27:41 | op_null: | bramm: I don't think I said that. |
01:28:06 | gmaxwell: | in any case, that storage proof stuff actually came out of discussions about which grab-bag soup of connection options to use. E.g. if you're not in one of the freebie categories, maybe you'd be willing to burn some disk space. |
01:28:48 | bramm: | Bitcoin has very real problems: mining centralization, the block chain not scaling. It doesn't have problems with needing smart transactions. It doesn't really even have problems with transactions not going through fast enough |
01:29:28 | bramm: | gmaxwell, Yes low ping time is the other one which I didn't remember off the top of my head |
01:29:29 | tdlfbx: | bramm: have you read the sidechains paper? 2 days to bail in/bail out of a sidechain? I'd say that's not fast enough. |
01:30:34 | tdlfbx: | op_null: I have not designed a market which I think works, using the blockchain. Just brainstorming. |
01:30:40 | gmaxwell: | tdlfbx: hm? have _you_ read it? :P see appendix c. The primary method to move in and out isn't the 2wp. |
01:30:59 | tdlfbx: | Still waiting on the atomic implementation :-P |
01:31:39 | amiller: | tdlfbx, that's an awful comment; it doesn't have anything to do with bitcoin, and it's not inherent to sidechains either |
01:31:52 | gmaxwell: | No one really cares. (you can actually see there have been some swap transactions in the blockchain) |
01:31:59 | bramm: | gmaxwell, The protocol should be that normally you accept all new incoming connections and drop an existing one if you're over the limit, but peers have the option to become persistent with proofs of work if they're having trouble connecting |
01:32:21 | tdlfbx: | @amiller 2 days is too slow is an awful comment? |
01:32:53 | bramm: | side chains are not a core part of bit coin. In fact currently they aren't a part of bit coin at all |
01:33:10 | op_null: | bramm: the problem is the sort of people who are going to be flooding you with connections have a lot more CPU power than you. this is the reason that hashcash failed to become a thing in the first place. |
01:33:21 | op_null: | ie, legit people have less CPU power than the non-legit ones. |
01:33:44 | bramm: | op_null, that's why you also reserve spots for best IP matches, closest peers, and longest-known peers |
01:34:02 | op_null: | IP match with what? |
01:34:18 | bramm: | With yourself, you hash together the IPs of the end points |
01:34:36 | op_null: | I don't see what that gains you. |
01:34:36 | gmaxwell: | tdlfbx: with respect to markets, you haven't seen the freimarkets paper? http://freico.in/docs/freimarkets-v0.0.1.pdf |
01:35:07 | op_null: | a botnet operator who wants to flood you can just pair nodes who "match" and then assign them specific zombies. |
01:35:13 | bramm: | That one has the nice property of producing very good interconnectedness. It also results in being resistant to attackers who don't have huge numbers of IP addresses. |
01:35:18 | gmaxwell: | yes, it's annoying botnets actually have a lot of cpu power relative to honest ordinary nodes. That was part of the rationale for storage there. |
01:35:21 | tdlfbx: | @gmaxwell no. Thanks! Neat! |
01:36:22 | tdlfbx: | * tdlfbx wonders why Counterparty gets all the press on this topic...and what's wrong with it. |
01:36:26 | op_null: | bramm: if you are a criminal you have no problem purchasing botnets with hundreds of thousands, maybe millions of exit points. you can't make the assumption that your opponent doesn't have nearly unlimited IP addresses. |
01:36:43 | bramm: | op_null, that's why it's only one of several different techniques used |
01:36:44 | gmaxwell: | op_null: the idea is that you carve up your capacity into various chunks, and so even if an attacker can optimize for one kind of chunk they can only monopolize that particular set of connections. |
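A sketch of that carved-up capacity, assuming hypothetical peer objects with `ip`, `min_rtt`, `first_seen` and `connection_pow` attributes: each criterion protects its own fixed quota of inbound slots, so an attacker who games one criterion only wins that one bucket.

```python
import hashlib
import random


def netgroup_score(my_ip: str, peer_ip: str) -> int:
    """bramm's 'IP match' criterion: hash both endpoint addresses together so the
    preference is unpredictable to an attacker choosing its source addresses."""
    return int.from_bytes(
        hashlib.sha256(f"{my_ip}|{peer_ip}".encode()).digest()[:8], "big")


# quota per bucket, plus a score where *smaller is better* (sizes are illustrative)
BUCKETS = {
    "lowest_min_rtt": (8, lambda me, p: p.min_rtt),
    "ip_match":       (8, lambda me, p: netgroup_score(me.ip, p.ip)),
    "longest_known":  (8, lambda me, p: -p.first_seen),
    "proof_of_work":  (8, lambda me, p: -p.connection_pow),
    "random":         (8, lambda me, p: random.random()),
}


def protected_peers(me, peers):
    """Inbound peers protected from eviction; everyone else competes for what's left."""
    keep = set()
    for quota, score in BUCKETS.values():
        keep.update(sorted(peers, key=lambda p: score(me, p))[:quota])
    return keep
```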
01:37:15 | op_null: | though really, Bitcoin vs. Botnet would end in 7500 hosts being flooded off the map. |
01:37:32 | op_null: | sausage through the eye of a needle, and all that. |
01:38:09 | bramm: | The Bitcoin peer protocol is not terribly well optimized for defending against botnets. We're talking about things it *could* do |
01:38:30 | gmaxwell: | Indeed. that's part of why better inbound heuristics haven't been a high priority compared to the scalability improvements needed to just encourage more nodes to run. :) |
01:38:31 | op_null: | gmaxwell: yes, it's probably more optimal than what we currently use which is pretty dumb. |
01:38:44 | gmaxwell: | op_null: hey now, what we currently use has some nice properties. |
01:39:21 | gmaxwell: | It's very strong against short term attacks against already running nodes. So it's not totally dumb. |
01:39:49 | gmaxwell: | You could do worse. e.g. doing nothing else but randomly evicting when new connections come in over the limit would be much worse. |
01:40:35 | bramm: | IP addresses are stronger than you might think - if even 10% of your connections are to legit peers, you're fine |
01:40:38 | op_null: | gmaxwell: I don't see why a very small botnet couldn't exhaust all of the sockets on the network. requires almost zero bandwidth, requires people filling up their iptables ruleset with zombies, and is incredibly cheap. |
01:41:00 | bramm: | A bonnet would have to crowd out legit peers to a ridiculous degree to be successful |
01:41:03 | bramm: | botnet |
01:41:04 | gmaxwell: | we only need a single connection to the honest network. |
01:41:24 | bramm: | gmaxwell, You need an average of more than two connections for the network as a whole to function |
01:41:34 | gmaxwell: | op_null: because once you're already up the botnet cannot evict your existing connections. They can give a hard time to newly starting nodes, indeed. |
01:41:43 | op_null: | bramm: bitcoin peers have a reasonably high churn. as each node DCs it is replaced with a zombie socket. |
01:42:04 | gmaxwell: | op_null: the nodes that count the most do not have high churn. |
01:42:23 | gmaxwell: | (in particular it's important that miners not be partitioned from each other, and that merchants not be partitioned from miners) |
01:43:02 | gmaxwell: | I am not defending the idea that we couldn't do much better, doing better has been on the long term roadmap for some time. Just pointing out that it could be much worse too. :) |
01:43:06 | op_null: | gmaxwell: what version did the null vin bug get fixed? there's still a clear 10% of the network on 0.8.* |
01:43:08 | bramm: | Do bit coin peers view age of the peers they're talking to based on IP address or public key? |
01:43:23 | gmaxwell: | bramm: fair enough. |
01:43:46 | bramm: | Sorry about saying 'bit coin' all the time, by the way, xchat keeps autocorrecting to that |
01:44:32 | gmaxwell: | bramm: there isn't really any age used at all in bitcoin. We never disconnect peers when our connections fill. So we're at least minimally robust against a working topology being made non-working by a short term attack. |
01:45:14 | bramm: | gmaxwell, If a peer goes down and comes back they don't get credit for being old and get to displace someone newer? |
01:45:22 | op_null: | gmaxwell: if you want churn, a malicious actor with some sort of memory exhaustion nasty can cause that. if we assume everybody uses the defaults, there's a maximum of (6500*125) listening sockets on the network. if we assume maximums based on linux defaults we have (6500*1024) neither is huge. |
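Multiplying out the socket ceilings op_null is using (6500 is the listening-node estimate given above; 125 is the default connection cap, 1024 a typical Linux descriptor limit):

```python
listening_nodes = 6500
print(listening_nodes * 125)    # 812,500 inbound slots with default settings
print(listening_nodes * 1024)   # 6,656,000 with typical Linux limits
```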
01:45:23 | gmaxwell: | bramm: there is no keying or authentication at all in the p2p protocol. The system generally assumes any peer is malicious, and doesn't do any identity tracking with peers. (for better or worse) |
01:45:38 | bramm: | Well that explains that |
01:46:10 | gmaxwell: | op_null: this is why we fix memory exhaustion attacks. :) |
01:46:20 | gmaxwell: | (more than any other reason, in fact!) |
01:46:57 | op_null: | if you make a wild assumption and say that a botnet zombie can make 125 outgoing connections, you'd only need to match listening nodes with zombies to totally take up the whole listening ability of the network. |
01:48:49 | op_null: | but yes, I realise that. |
01:49:01 | op_null: | long lived connections are gold. private peering is gold. |
01:49:07 | gmaxwell: | Yea, not sure what we're arguing about since I think we understand each other completely. |
01:49:36 | op_null: | I don't think we're arguing |
01:49:43 | gmaxwell: | oh okay. :) |
01:51:07 | op_null: | if I was coming across like that, totally didn't mean to |
01:55:36 | bramm: | gmaxwell, Trying to take a high level view of it, is it fair to say that for a coin to get released back from the side chain there has to be a proof that an intention of release happened on the side chain, followed by a certain amount of work on the side chain? |
01:56:41 | gmaxwell: | bramm: right. And the party that wants that coin released goes and gathers up that proof, and uses it as the signature of a bitcoin transaction. |
01:57:37 | c0rw1n: | c0rw1n is now known as c0rw|sleep |
01:58:24 | bramm: | And the assumption is that the parameters will be set properly when the coin is pegged, such that it will be impractical for an attacker to pull the transaction out without the benefit of the side chain's mining? |
01:59:04 | bramm: | And if there are multiple transactions for releasing the coin back whichever valid one gets accepted into the bit coin block chain first wins? |
02:00:41 | bramm: | What is the point of petty coin? |
02:00:54 | gmaxwell: | bramm: That, and somewhat more. It's a little more elaborate: When you release the coin it isn't released directly. It's released into a holding state (a time locked transaction); if nothing more happens then it becomes available some time later. In the interim the holding state has a secondary exit condition: you can abort the transfer if you show, with more sidechain work, a chain in which the withdrawal didn't happen. |
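A very rough sketch of that release flow as a state machine; the constants and names are made up for illustration, the real construction being the 2-way peg of the sidechains paper (section 3.2):

```python
from dataclasses import dataclass


@dataclass
class PendingWithdrawal:
    spv_proof_work: int   # work shown by the SPV proof that the sidechain released the coin
    matures_at: int       # mainchain height at which the holding state becomes spendable
    aborted: bool = False


CONTEST_PERIOD = 144 * 2  # roughly two days of mainchain blocks; a per-sidechain parameter


def start_withdrawal(spv_proof_work: int, height: int) -> PendingWithdrawal:
    """The coin is not released directly; it enters a time-locked holding state."""
    return PendingWithdrawal(spv_proof_work, height + CONTEST_PERIOD)


def contest(w: PendingWithdrawal, reorg_proof_work: int) -> None:
    """Secondary exit: a proof of MORE sidechain work on a chain that does not
    contain the withdrawal aborts the transfer."""
    if reorg_proof_work > w.spv_proof_work:
        w.aborted = True


def spendable(w: PendingWithdrawal, height: int) -> bool:
    return not w.aborted and height >= w.matures_at
```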
02:01:48 | bramm: | gmaxwell, I see. It would be a lot easier to understand if this concept was explained up front in the paper |
02:01:56 | gmaxwell: | And this works without changing the bitcoin consensus model. |
02:02:43 | bramm: | Or rather, it would be more understandable to me if it were explained this way up front, because from that explanation it's obvious to me how to fill in all the details. |
02:03:21 | gmaxwell: | bramm: section 3.2 (see also the diagram). sorry, communication is hard. There are actually a lot of details, hitting the right level of abstraction is an art. |
02:06:08 | bramm: | I will grant you that it is in fact a deterministic problem to decide what to accept, although it adds a lot of stuff to the acceptance process, and the side chains are very shackled to how bit coin does things |
02:07:24 | bramm: | Also, the shortened proofs sketch me out. I've run numbers on those things, it takes a lot of samples to get a decent amount of assurance |
02:08:05 | gmaxwell: | It does limit how the sidechain can express its willingness to allow a coin to move. And also potentially what constitutes 'work'. In the most abstract sense, e.g. if script were upgraded to run arbitrarily complex programs... then the only real constraint on the external system is that it be "compactly transcript verifiable", e.g. that you can extract a compact transcript from it to prove it authorized something. |
02:09:32 | bramm: | Running arbitrarily complex programs in verification scripts sketches me out |
02:09:43 | gmaxwell: | As it should. |
02:10:13 | op_null: | bramm: you'll love Ethereum then. |
02:10:17 | gmaxwell: | bramm: I'm not sure if the construction you're using is precisely the same. Ours doesn't require a fiat shamir transform to get a result that has the same expected work to forge as it shows. (though additional sampling, or just limiting how long the jumps it permits are, prevents you from 'getting lucky' even with low probability) |
02:10:33 | bramm: | op_null, see my earlier comments on the utility of smart transactions or lack thereof |
02:10:58 | op_null: | Ethereum is scary for other reasons |
02:11:30 | bramm: | gmaxwell, I have a construction with similar overall properties, it does a somewhat different thing for a somewhat different purpose |
02:11:55 | bramm: | well, a very different thing for a very different purpose. It does a similar sampling trick though. |
02:12:11 | bramm: | op_null, how do you find ethereum scary? |
02:12:21 | bramm: | I mean, I don't particularly have confidence in the developers... |
02:13:08 | gmaxwell: | bramm: smart contracts are more useful than you're likely giving them credit for... and also much less useful than 1001 enthusiastic doofuses give them credit for. (uh, wtf, replacing internet services?!)... but the reality is that there are relatively few people interested in building production tools with them; not when centralized services are so much easier to construct and have clear monetization models (e.g. get people to ... |
02:13:14 | gmaxwell: | ... deposit lots of funds then get "hacked"). :) |
02:13:40 | op_null: | bramm: writing consensus code in three languages simultaneously is essentially suicidal. the ones they chose are not ideal either. |
02:14:31 | bramm: | gmaxwell, My point being, you need a lot of samples to get a reasonably high degree of confidence, basically you might as well include everything until your number of samples is well into the thousands. |
02:15:15 | bramm: | op_null, Yikes! That's just bad engineering, ship your fucking product in a reference language to begin with, especially when there's crazy edge cases with how long scripts are allowed to run. |
02:16:32 | bramm: | gmaxwell, Yeah, umm, the biggest problem with bit coin right now is that it scales so poorly that the only way it really works is for people to do their transactions in unregulated banks which claim to be backing their transactions in bit coins, which works about as well as unregulated banks have historically worked |
02:17:34 | bramm: | Well, that and the lack of demonstration of valuable commerce besides speculation happening, but if you're working on the underlying technology you have to assume as a given that that will improve over time. |
02:17:37 | phantomcircuit: | bramm, that's not exactly true |
02:18:53 | gmaxwell: | bramm: have you seen their ... crazy thing about not wanting to have any CISC-like macroscopic operations, like for slow cryptographic functions? so instead they make it so you can call functions from any past transactions by hash, and then implementations would replace scripted implementations with native ones... but the native ones would use less of the consensus-critical instruction cycle counter. ... god knows how they plan ... |
02:18:59 | gmaxwell: | ... to verify to any level of assurance that the optimized functions are actually unconditionally identical. |
02:19:13 | kanzure: | are spv clients unregulated banks now? |
02:19:15 | op_null: | bramm: you'd think seeing bitcoin ruby would have put them off it, but who knows. |
02:19:30 | gmaxwell: | bramm: that's not actually a material problem in the bitcoin ecosystem today. I agree it's a potential issue in the future, but it's not the environment now. |
02:19:48 | gmaxwell: | (people do use unregulated banks, but not due to scaling reasons) |
02:20:02 | gmaxwell: | If someone has told you otherwise, they've misinformed you. |
02:21:07 | bramm: | kanzure, I'm holding different conversations at once, if you mash them together my statements are going to be gibberish |
02:22:17 | gmaxwell: | bramm: I think kanzure was pointing out that you can directly transact in bitcoin using a standard lite wallet (SPV) client, without using an unregulated bank-like entity. |
02:22:26 | gmaxwell: | And that many many bitcoin users do so. |
02:22:31 | bramm: | Oh god. Well, I'm going to cover my ears and go LALALALALA about ethereum development now. The main developer is like 20 years old. Unfortunately for him he's already accepted a bunch of money for ether, which is bad enough because there's a strong implication that he's now selling an unlicensed financial instrument if he's representing that it will have value in the future, and he's still vaporware with all this crap... |
02:22:39 | kanzure: | nah, best to assume i'm just spitting out gibberish |
02:22:46 | kanzure: | |
02:23:16 | bramm: | Yes you can do lightweight transactions in bit coin, but this is basically a subsidized activity. If everybody did it the system would melt. |
02:23:34 | kanzure: | you can use spv clients even if you are running a full client too |
02:23:40 | kanzure: | so everyone can use spv safely |
02:24:30 | kanzure: | i am not sure why you are plugging your ears about ethereum. your concerns seem to track everyone else's. |
02:24:32 | gmaxwell: | bramm: I'm not sure why you think this. Yes, the security and incentives of the system depend on there being _many_ users of full nodes. But that doesn't make the use of spv clients subsidized in any substantial way. |
02:25:10 | bramm: | I mean, it wouldn't take much for most of the peers currently running full nodes to fold under the load |
02:25:20 | Luke-Jr: | gmaxwell: I assume he means SPV nodes rely on full nodes doing the verifications of blocks for them. It makes sense. |
02:25:50 | Luke-Jr: | otoh, those full nodes do the same work basically either way |
02:25:50 | bramm: | kanzure, Some people I know are somewhat around it, and I see it resulting in the developers winding up in jail |
02:26:16 | gmaxwell: | bramm: yes, those concerns are concerns others have had too. :( sad. |
02:26:20 | bramm: | Not that there's anything wrong with lightweight wallets |
02:27:18 | op_null: | bramm: you're not alone in thinking that by far. I'm in that boat, they contradict a lot of the "we are just selling fuel" arguments by making blog posts that include the phrase "price of ethereum" and "ethereum exchange rate" |
02:27:31 | gmaxwell: | bramm: following along with the chain requires no more than 14kbit/sec per client maximum by the current protocol rules. There also isn't a trust requirement there, e.g. so single large nodes with lots of bandwidth could support large numbers of spv clients if required. |
02:27:55 | gmaxwell: | It's also technically possible for spv clients to relay to each other, though no one bothers implementing this today. |
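Rough arithmetic behind that 14 kbit/s ceiling, assuming the 1 MB block size limit and the 10-minute target spacing:

```python
max_block_bytes = 1_000_000   # consensus block size limit at the time
block_interval_s = 600        # 10-minute target spacing
print(max_block_bytes * 8 / block_interval_s / 1000)   # ~13.3 kbit/s, i.e. under 14 kbit/s
```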
02:29:04 | bramm: | gmaxwell, the problems come up when you increase the transaction rate by orders of magnitude (the current rate is kind of sad). Then the lightweight wallets keep working but the full nodes become alarmingly centralized. |
02:29:22 | gmaxwell: | In any case, fair enough that you understand it... but no one today is being forced into using any banklike service on account of it. SPV clients work fine now, and have worked fine since they've existed. e.g. there haven't been any incidents where people had to stop using them. |
02:29:52 | gmaxwell: | Sure, if the rules of the system were changed to unhinge the limits it might immediately become completely centralized. |
02:29:57 | gavinandresen: | * gavinandresen groans at the “we’re doomed to centralization if we’re successful” argument |
02:30:43 | gmaxwell: | (the current limits being sad is debatable, ... I mean, we currently can handle more daily volume than the bank of england does... :P but those are mostly disputes over which benchmark to use). |
02:30:54 | Luke-Jr: | bramm: we can only speculate on what might happen when we begin hitting the current transaction rate limits |
02:31:02 | bramm: | Also, just because something isn't an active scaling problem right now doesn't mean it isn't having an effect on what's going on right now. There might be SOME PEOPLE who have considered doing things and not done them to avoid causing huge problems |
02:31:27 | bramm: | gavinandresen, I'm not portending doom, I'm discussing technical challenges |
02:31:28 | Luke-Jr: | bramm: maybe those things are a bad idea? |
02:31:39 | Luke-Jr: | especially if they don't need to do them.. |
02:31:45 | op_null: | I don't think many people stop doing crappy things in order to save the network some trouble. look at how big satoshi dice was for a while. |
02:31:48 | gmaxwell: | bramm: yes, sure, but I'm speaking to you with as much experience in this space as anyone and I can assure you that that kind of sophisticated thinking is not a major factor. |
02:32:00 | gmaxwell: | it's been brutally hard to get people to consider things like that in the past. |
02:33:40 | bramm: | Well, it's been brutally hard to discuss it with the bad actors. The good actors you might never have heard about, because they were too polite |
02:33:40 | gmaxwell: | bramm: there are additional scaling tools available that require no fundamental change to bitcoin and that improve the scale vs decentralization tradeoff. Though absent a demand most have not been developed yet beyond the initial ideas and requirements, e.g. fraud proofs for fractional validation https://en.bitcoin.it/wiki/User:Gmaxwell/features#Proofs |
02:33:57 | bramm: | kanzure, we had an acronym collision in the discussion a bit earlier |
02:35:34 | gmaxwell: | * gmaxwell bbl |
02:35:37 | bramm: | Oh god, does satoshi dice use the hash of the root to decide who gets a coin? |
02:36:10 | op_null: | no, they used a blinded nonce |
02:36:57 | op_null: | post a hash of the nonce, hash the incoming TX with the nonce, if it's above a target they win, after the day is over post the nonce list. something like that. |
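A sketch of that commit-reveal scheme, heavily simplified and with made-up names (the real SatoshiDice details differed):

```python
import hashlib
import os


def new_day():
    """Operator picks a secret and publishes its hash before taking any bets."""
    secret = os.urandom(32)
    commitment = hashlib.sha256(secret).hexdigest()
    return secret, commitment


def bet_wins(secret: bytes, txid: bytes, target: int) -> bool:
    """Each incoming bet transaction is hashed with the secret; above-target rolls win."""
    roll = int.from_bytes(hashlib.sha256(secret + txid).digest(), "big")
    return roll > target

# After the day ends the secret is published, so anyone can recompute every roll
# against the earlier commitment -- modulo the caveats raised below (the operator
# can still bet against itself, and players can selectively double-spend losing bets).
```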
02:37:14 | bramm: | Still, it's an abuse of the expressiveness of the language for accepting coins |
02:38:01 | op_null: | oh it was totally abusive. their transactions made up half the block chain at one point. |
02:38:45 | op_null: | 9940075 total satoshi dice transactions |
02:38:50 | gmaxwell: | well, 'inefficient' ... the half wouldn't be a concern as much as the half of that half which consisted of nothing but 1e-8 "you lost" payments. |
02:38:53 | op_null: | 4.1GB |
02:39:13 | bramm: | Ouch |
02:39:16 | op_null: | in july 2013 they made up 54% of the entire block chain size. |
02:39:22 | gmaxwell: | but it's more complicated than it seems; e.g. their transaction volume ended immediately when they were no longer a publicly traded asset. |
02:39:42 | op_null: | uh |
02:40:04 | gmaxwell: | So I personally think it's likely that most of the activity was just manufactured to pump the valuation of their 'cryptostock' (by whom is anyone's guess). |
02:40:46 | op_null: | I might have to write some scripts to see if I can identify that sort of action. their behaviour is so obvious and the volumes so high that it would be hard to obscure something like that. |
02:40:54 | kanzure: | how would you "agree" on "the appropriate number of full nodes" on the network anyway? re: scalability. someone has to have the full blockchain... at least one person. and ideally not just one person. |
02:41:05 | gmaxwell: | bramm: yea, it's still exploitable, though, since you just look to see if you'd lose and selectively double spend. It also doesn't prove what many people have thought it proved. E.g. the operator was free to play himself and pick the outcome. Which would be especially bad news for 'shareholders', since if the site ran lucky, funds could be embezzled via preselected outcomes. |
02:41:22 | gmaxwell: | kanzure: hm? no one technically needs to have the whole thing. |
02:41:32 | kanzure: | are you sure |
02:41:41 | kanzure: | like what if i want to see all transactions anyway |
02:41:48 | bramm: | gmaxwell, It *could* be implemented properly, although it would still be a stupid thing |
02:41:57 | gmaxwell: | kanzure: too bad? system works fine even if you can't. :) |
02:41:59 | gmaxwell: | bramm: yep. |
02:42:36 | wallet421: | wallet421 is now known as wallet42 |
02:42:56 | kanzure: | gmaxwell: i'm out at dinner with andytoshi and he said the exact same thing |
02:43:00 | kanzure: | andytoshi and i believe that you are a long-range telepath |
02:43:07 | gmaxwell: | kanzure: absent recursive snarks (which make everything nicely O(1) if their security holds), you still only need to witness the history to build a current state for forward evaluation. That doesn't require that any one person have the whole thing, e.g. you can go gather it from many hosts and build your state and forget it. |
02:43:10 | bramm: | kanzure, it would always be better to have less bandwidth necessary to keep up/start and be able to run a partial rather than full node |
02:43:29 | phantomcircuit: | gmaxwell, i hadn't actually thought of the site operator gaming the system |
02:43:34 | phantomcircuit: | that's interesting |
02:43:59 | kanzure: | bramm: so in your ideal environment, nobody would have the entire blockchain? who would have parts of it? |
02:44:10 | gmaxwell: | kanzure: and once you have a state, see the fraud proofs writeup I gave... it's possible to just trust updates to the state, and randomly verify them... such that so long as you're not partitioned from honest hosts you will learn of any misbehavior with arbitrary confidence. |
02:44:15 | kanzure: | s/environmet/insert correct word here |
02:44:16 | op_null: | phantomcircuit: it's been done. some of the "dice" sites did it really badly and got caught incrementing nonces in funny ways to avoid winning ones. |
02:44:20 | bramm: | kanzure, People would have as much of it as they could reasonably handle |
02:44:30 | kanzure: | isn't that how it already works |
02:45:33 | gmaxwell: | kanzure: not really, because the pruning knob isn't exposed to users, and we have no p2p facility for gathering a blockchain from sparse peers. But it will be, it's the design... |
02:45:49 | gmaxwell: | e.g. no consensus changes needed for that. |
02:46:24 | bramm: | gmaxwell, I'm not sure that it can be done quite right, I need to read through your comments on proofs which can be done to have an opinion |
02:46:29 | gmaxwell: | To not have to even few whole blocks at the tip requires fraud proofs, which need a few additional commitments. (or an interactive protocol with your peers) |
02:46:37 | bramm: | Obviously building a new block chain from scratch it could be done very well |
02:46:51 | bramm: | (for some definitions of 'obvious' :-) ) |
02:47:01 | phantomcircuit: | gmaxwell, i think you accidentally a word there |
02:47:44 | gmaxwell: | bramm: I mean some of this was considered in the original design of bitcoin. E.g. section 7 explains why you need not store everything, section 8 on spv clients points out "prompting the user's software to download the full block and alerted transactions to |
02:48:01 | gmaxwell: | confirm the inconsistency" but that isn't actually efficient in bitcoin as it is today, not without additional commitments in the blocks. |
02:48:14 | zooko: | gmaxwell: I like your idea of selecting peers by short rtt. |
02:48:28 | zooko: | What's wrong with the topology is clumpiness, right? |
02:48:30 | zooko: | Or something else? |
02:48:35 | gmaxwell: | zooko: well it can't be used exclusively, as it results in degenerate topology. |
02:48:43 | gmaxwell: | yea, it is really prone to partition. |
02:49:29 | gmaxwell: | e.g. if you take the n lowest rtt... well as soon as you have >threshold peers in north america they helpfully partition themselves from europe completely. :P |
02:49:31 | bramm: | The nearest neighbor heuristic isn't nearly as bad for a DHT because a DHT has leaves which ensure non-clumpiness |
02:50:10 | gmaxwell: | but it's a fine thing to do for some subset of your connections, so long as there isn't a feedback loop. |
02:51:07 | gmaxwell: | e.g. the probability that you accept a connection for your {most favorite netblock} slots cannot be proportional to {also connected to my RTT-preferred hosts}. |
02:51:25 | gmaxwell: | Actually being sure that no such feedback loops exist is one reason I've not run headlong into implementing this stuff. |
02:51:42 | bramm: | gmaxwell, Better to keep it simple and only reserve a set number of slots for near RTTs |
02:51:54 | bramm: | gmaxwell, section 7 of what? |
02:52:11 | gmaxwell: | there are some very nice poisoning attacks for address rumoring systems with limited state where an attacker can exponentially boost their concentration of slots. |
02:52:20 | gmaxwell: | bramm: the bitcoin whitepaper https://bitcoin.org/bitcoin.pdf |
02:53:40 | gmaxwell: | We've implemented state pruning in bitcoin core-- and you can happily run a node with a gig of space--, but it's not exposed yet; in part because we haven't yet done the p2p support for gathering up the blockchain from a network where not every full node has the full chain. |
02:53:53 | bramm: | gmaxwell, Yes of course merkle trees should be used. It would be preferable if their hashes were in the actual transaction log so you could catch up from a point not at the very beginning of time based on them. |
02:54:49 | gmaxwell: | bramm: hm? you can start in the middle. SPV clients do. |
02:55:06 | bramm: | Then how do they know what the state was at the point they start? |
02:55:51 | tromp_: | faith in the accumulated pow difficulty since |
02:55:59 | gmaxwell: | A SPV client is only concerned about its own wallet(s), so they start syncing from the point where their wallet was created, and they know they had no transactions before they existed. |
02:56:42 | bramm: | Oh I see. Still it has limitations where it actually has to sync from that point instead of being able to sleep for a while and then wake up and catch up quick |
02:56:42 | gmaxwell: | They do pull older headers, but they're 80 bytes each, so whoop-de-do... they need them for total work comparisons, compact proofs could be used in theory but it hardly seems worth it since ten years of headers is only 40 megs. |
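The 40 MB figure, spelled out (80-byte headers, one block per ten minutes):

```python
header_bytes = 80
blocks_per_year = 365 * 24 * 6               # one block every 10 minutes
print(header_bytes * blocks_per_year * 10)   # ~42,000,000 bytes of headers for ten years
```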
02:57:04 | op_null: | doing an SPV sync is ridiculously fast. |
02:57:20 | gmaxwell: | * gmaxwell actually goes to dinner now |
02:57:21 | op_null: | "catching up" isn't really a problem in any sense, even with remote peers. |
02:57:32 | op_null: | it's a massive *privacy* problem, but not a speed one. |
02:58:02 | bramm: | In any case, everything is better if headers include references to hash roots of merkle trees of the current state |
03:05:07 | bramm: | tromp_, I meant the state of their wallet, not the hash root |
03:06:24 | tromp_: | yes, i realized that after gmaxwell's correct answer |
03:07:46 | bramm: | tromp_, I'm trying to read through your cuckoo docs now but I've been too busy chatting on irc |
03:08:23 | tromp_: | yes, i figured you might have reached the end of the abstract by now |
03:10:14 | bramm: | It's good to have found the place where there's coherent discussion of block chain engineering. The discussion in most forums is utter drivel. |
03:10:46 | tromp_: | your proposal is very similar to Momentum, where they require a full collision H(A)=H(B), with an additional difficulty constraint on H(A,B) |
03:11:11 | tromp_: | that version can be seen as a special case of Cuckoo Cycle |
03:12:13 | bramm: | I basically suggested making it 4sum instead of 2sum to lower CPU load. Other than that what I suggested is not very clever. |
03:13:11 | bramm: | I'd be very curious to hear commentary on what operations require almost as much power on ASICs as on CPUs. |
03:15:20 | bramm: | tromp_, gmaxwell was showing... skepticism that requiring more memory is better than more CPU on the grounds that ASIC miners spend more on power than on CPU |
03:15:31 | bramm: | than on hardware I mean |
03:18:47 | tromp_: | i'd like to see the numbers on that. how long do you have to run the most efficient ASIC miner to have spent as much on power as on hardware? |
03:19:00 | op_null: | not long. |
03:19:27 | tromp_: | has anyone written down such a calculation? |
03:19:42 | op_null: | SP20 is 1.3TH/s at 1152W. costs 795 USD. |
03:20:26 | bramm: | op_null, how much time is that? |
03:20:47 | bramm: | And how many general purpose CPUs is it equivalent to? |
03:21:37 | op_null: | pretend 20 US cents / kWh. so $5.5296 a day. |
03:22:08 | bramm: | Equivalent to how many CPUs? At what marginal cost to produce each one? |
03:22:15 | op_null: | so 143 days for the power costs to exceed the capital cost for that miner. |
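The 143-day figure is hardware price divided by daily electricity cost; a small helper using only the SP20 numbers quoted above (1152 W, $795, $0.20/kWh), with the function name being illustrative:

    def days_to_power_parity(hardware_usd, watts, usd_per_kwh=0.20):
        """Days of continuous operation until cumulative electricity cost
        equals the hardware purchase price."""
        kwh_per_day = watts / 1000.0 * 24
        return hardware_usd / (kwh_per_day * usd_per_kwh)

    print(int(days_to_power_parity(795, 1152)))   # 143, matching op_null's figure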
03:22:27 | bramm: | I've been told the cost of designing an ASIC is around $1 million |
03:22:30 | op_null: | depends a lot on the CPU and the workload. |
03:22:42 | op_null: | depends a lot on the ASIC and the design. |
03:22:50 | tromp_: | 143 days is on the order of hardware depreciation time |
03:23:18 | fenn: | dumb question, why do bitcoin ASICs depreciate? |
03:23:27 | bramm: | Yeah that doesn't look like there's a clear winner |
03:23:34 | bramm: | fenn, moore's law |
03:23:43 | op_null: | a core i7 4770 has a TDP of 95W, but it's probably more like 200W with minimum accessories. |
03:23:49 | tromp_: | fenn, try making money with 1-year-old asics... |
03:24:17 | tromp_: | they're pretty much obsolete in terms of hash/watt |
03:24:33 | op_null: | bitfury stuff isn't that far behind. |
03:24:44 | fenn: | is it because smaller transistors use less power? |
03:25:05 | tromp_: | yes, that's the major reason |
03:25:10 | op_null: | partly. process size isn't everything. |
03:25:14 | bramm: | fenn, yes the two things which are still dropping are power needed to do an operation and time to fetch stuff from memory |
03:25:29 | bramm: | clock speed has basically stopped improving |
03:25:34 | op_null: | KNCminer has a 20nm process ASIC that's woefully higher power than the 50nm Bitfury |
03:26:00 | op_null: | bramm: ASIC miners for Bitcoin don't have memory. |
03:26:00 | tromp_: | reduced leakage current in better processes is another reason |
03:26:22 | bramm: | op_null, true but if they did cuckoo they would |
03:27:27 | tromp_: | then they'd incur a 50ns hit when switching rows in a memory bank |
03:27:36 | op_null: | bramm: I can't say anything about an ASIC that doesn't exist and hasn't been designed. |
03:28:03 | tromp_: | assuming use of commodity DRAM |
03:30:05 | bramm: | presumably ASIC DRAM would depreciate similarly to commodity DRAM |
03:30:58 | tromp_: | there's no such thing as high capacity affordable custom DRAM |
03:31:23 | bramm: | tromp_ Why is that? |
03:32:15 | tromp_: | there's billions of technology investment in the current DRAM markets |
03:32:45 | bramm: | The same can be said for CPUs but you can make an ASIC which can beat those at specific tasks for a million |
03:33:14 | tromp_: | because the ASIC is orders of magnitude more efficient |
03:34:19 | bramm: | Not sure what you mean. What's different between DRAM and ASICs? |
03:34:28 | tromp_: | but if you spend millions to develop a custom DRAM and they're only 10x more efficient but cost 100x more due to lacking economies of scale... |
03:34:44 | tromp_: | then they're still not cost effective |
03:35:46 | tromp_: | with a memory hard pow, dram costs dominate the mining |
03:36:08 | tromp_: | so you need not only be more efficient than commodity dram, but also price competitive |
03:37:02 | tromp_: | that may require on the order of a billion $ to develop |
03:39:32 | tromp_: | i don't know of any ASICs developed with more than a few MB |
03:41:30 | tromp_: | cuckoo can also scale to require multiple memory chips, so that you cannot embed all computing logic within one chip |
03:41:44 | bramm: | A person who knows ASICs told me that you can put memory on them. I should ask him for more color on that. |
03:42:04 | tromp_: | yes, you can. scrypt asics have on the order of 1MB on them |
03:42:35 | tromp_: | but cuckoo requires at least 2 orders of magnitude more, for each single instance |
03:44:25 | tromp_: | the point of cuckoo is that it (appears to) reduce to just doing random memory accesses |
03:44:51 | tromp_: | with almost all computation just being cheap hashes to make the memory accesses random |
03:46:16 | tromp_: | so if you can do that vastly more efficiently with some custom ram, then you may have just improved a large class of applications |
03:46:51 | bramm: | To my mind if you're going to make a proof of work rely on memory, it should be a reasonable amount of memory, like 1 gig or 10 gigs |
03:47:24 | tromp_: | i planned to, but cuckoo takes a lot of time on 1GB |
03:47:38 | bramm: | Define 'a lot of time' |
03:47:41 | tromp_: | each attempt taking a few dozen seconds |
03:48:07 | fenn: | our future robot overlords are laughing at 1GB being "a lot of memory" |
03:48:15 | bramm: | That's an interesting thing to work into engineering, but hardly a disaster |
03:48:22 | bramm: | As long as verification is fast... |
03:48:31 | tromp_: | cuckoo can scale in memory size dynamically |
03:49:12 | tromp_: | but 10GB is currently only feasible if your block interval is like an hour or more |
03:49:49 | tromp_: | the single attempt time has to be significantly less than block interval time |
03:49:52 | bramm: | Maybe you could give a bonus to amount of work based on how much memory it used |
03:50:00 | tromp_: | to have some semblance of progress freeness |
03:50:12 | bramm: | Not clear how that function should scale though |
03:50:20 | tromp_: | yes, bramm, the paper proposed that |
03:50:44 | bramm: | log(n) might be appropriate |
03:50:47 | tromp_: | in the style of myriad coin |
03:51:00 | bramm: | What function does myriad coin use? |
03:51:04 | tromp_: | i only proposed a constant number of sizes |
03:51:22 | tromp_: | it has 5 different pows, each with its own independent difficulty |
03:51:41 | tromp_: | so if one pow gets used more heavily, it will become more difficult to compensate |
03:51:55 | tromp_: | in the end each pow should end up with freq 20% |
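A rough sketch of the Myriadcoin-style scheme tromp_ describes: each PoW carries its own difficulty, adjusted independently so that each ends up contributing roughly 20% of blocks. The proportional retarget rule below is an illustrative assumption, not Myriadcoin's actual formula:

    def retarget(difficulties, recent_blocks, target_share=0.2):
        """difficulties: dict pow_name -> difficulty.
        recent_blocks: list of pow_name, one entry per recent block.
        A pow that found more than its target share of blocks gets a
        proportionally higher difficulty, and vice versa."""
        n = len(recent_blocks)
        new = {}
        for name, diff in difficulties.items():
            share = recent_blocks.count(name) / n if n else target_share
            share = max(share, 0.01)                      # avoid collapsing to zero
            new[name] = diff * (share / target_share)     # toy proportional rule
        return new

    diffs = {"sha256d": 1.0, "scrypt": 1.0, "skein": 1.0, "groestl": 1.0, "qubit": 1.0}
    blocks = ["sha256d"] * 40 + ["scrypt"] * 10 + ["skein"] * 20 + ["groestl"] * 20 + ["qubit"] * 10
    print(retarget(diffs, blocks))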
03:53:19 | bramm: | I'll have to digest that later, right now just trying to get what the basic pow is |
03:54:09 | bramm: | By 'takes an hour' does that mean always takes an hour, or takes on average an hour with equal likelihood at any given moment of finishing? |
03:54:30 | tromp_: | pretty much always (for fixed hardware) |
03:54:50 | tromp_: | if you find a solution, it will always come near the end of the attempt |
03:55:43 | tromp_: | well, 1 hr for the hypothetical block interval. the single attempt time could be something like 5 mins |
03:58:11 | bramm: | Trying to parse through the paper now. If I understand correctly, you're looking for circular chains? |
03:58:23 | tromp_: | they're called cycles:) |
03:58:32 | tromp_: | a standard notion in graph theory |
03:58:57 | bramm: | And how many edges are there? Is it, like, half of all possible ones, or does each node have roughly a constant number, or what? |
03:59:14 | tromp_: | there are half as many edges as nodes |
03:59:23 | tromp_: | e.g. 2^32 nodes, 2^31 edges |
03:59:49 | tromp_: | each node has expected degree 1 |
04:00:13 | tromp_: | this makes the expected number of cycles constant |
04:00:18 | bramm: | And that's an exact value, enforced by making edges using a PRF seeded with the challenge? |
04:00:40 | tromp_: | yes, number of nodes and edges is exact |
04:00:47 | bramm: | When you say 2^32 nodes, do you mean that many nodes in the whole graph or only one half of the bipartite graph? |
04:00:55 | tromp_: | in whole graph |
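A toy version of the graph construction being described: N/2 edges over N nodes split into two halves of a bipartite graph, each edge's endpoints derived from a keyed PRF of the edge index (the nonce). Cuckoo Cycle itself keys SipHash with the block header; here blake2b stands in as the PRF and the sizes are shrunk so the sketch runs instantly:

    import hashlib

    def edge(key, nonce, half_size):
        """Endpoints of edge `nonce`: one node in each half of the bipartite graph."""
        h = hashlib.blake2b(nonce.to_bytes(8, "little"), key=key, digest_size=16).digest()
        u = int.from_bytes(h[:8], "little") % half_size    # node in the left half
        v = int.from_bytes(h[8:], "little") % half_size    # node in the right half
        return u, v

    key = b"block-header-hash-stand-in"   # illustrative; the real key comes from the header
    HALF = 2 ** 10                        # 2^11 nodes total, 2^10 edges -> expected degree 1
    edges = [edge(key, n, HALF) for n in range(HALF)]
    print(edges[:4])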
04:02:12 | bramm: | Are you looking for a fixed size cycle or is bigger better? Is there an exponential drop off in the chances of there being a cycle of a given length? |
04:02:17 | tromp_: | fixed size |
04:02:35 | tromp_: | yes, there's a plot of distribution of cycle lengths |
04:02:43 | bramm: | What is the fixed size? Is it related to the amount of nodes? |
04:02:48 | tromp_: | i propose 42 |
04:03:05 | tromp_: | but it's up to the pow implementor really |
04:03:24 | tromp_: | i argue why it should be over 10 |
04:03:41 | tromp_: | maybe i'll let you read the whole paper:) |
04:03:57 | bramm: | Is the expected length of the largest cycle logarithmic on the number of nodes? |
04:04:19 | bramm: | Getting answers from you is very helpful for context. I have a lot of trouble reading math notation. |
04:04:24 | tromp_: | i don't know, but it could well be something like that |
04:05:45 | tromp_: | with sizes around 2^30, the largest cycle i see is several thousand |
04:05:51 | tromp_: | so it seems more than logarithmic |
04:06:06 | tromp_: | but maybe it's log^2 or something... |
04:06:53 | tromp_: | well, to see several thousand you'd need to try many graphs... |
04:07:22 | bramm: | Interesting. It isn't at all obvious to me what the best algorithm is for this, so I'll have to digest the paper |
04:07:47 | tromp_: | i would think that largest size is log(N)^Theta(1) |
04:07:58 | bramm: | Those large numbers would seem to imply that there are outliers, so you can't just credit someone with more work if they find a longer graph |
04:08:04 | bramm: | longer cycle I mean |
04:08:37 | tromp_: | cycle length must be fixed before the attempt |
04:08:54 | tromp_: | for each graph only one size will be accepted |
04:09:19 | tromp_: | this gives a few % of probability of success |
04:10:37 | tromp_: | the paper doesn't consider using multiple cycle lengths in one PoW |
04:10:53 | tromp_: | i don't see a use for that |
04:13:24 | zooko: | I'm currently planning to use cuckoo PoW |
04:13:25 | zooko: | . |
04:13:58 | bramm: | So it seems like best use for this is to have a fixed cycle length based on the amount of memory and have the 'value' of the pow be how many zeros its hash ends in |
04:14:55 | tromp_: | the difficulty is controlled by a 256bit target |
04:15:22 | tromp_: | where the 42 nonces must hash to something less than the target |
04:15:32 | bramm: | Yes that's what I meant |
04:15:38 | tromp_: | so you have some finer resolution than just #zeroes |
04:16:02 | zooko: | I had a conversation with nwilcox and amiller about it at a Hack Fest at my house last week, and I came up with a more specific argument for why I want a stronger PRF than SipHash-with-attacker-chosen-key. |
04:16:05 | tromp_: | but you were right if you considered fractional number of zeroes:) |
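As described, the proof's 42 nonces are hashed and compared against a 256-bit target, which gives finer-grained difficulty than counting whole leading zero bits. A sketch, with the serialization (4-byte little-endian nonces) and the outer hash (SHA-256) chosen purely for illustration:

    import hashlib

    def meets_target(cycle_nonces, target):
        """True if the hash of the 42 proof nonces is below the difficulty target."""
        data = b"".join(n.to_bytes(4, "little") for n in cycle_nonces)
        digest = int.from_bytes(hashlib.sha256(data).digest(), "big")
        return digest < target

    target = 1 << 252                      # easy target: roughly the top 4 bits must be zero
    print(meets_target(list(range(42)), target))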
04:16:58 | tromp_: | what was the argument, zooko? |
04:17:19 | gmaxwell: | "< bramm> tromp_, gmaxwell was showing... skepticism " yes, I just don't think it's resolved. I don't know. The initial argument to promote memory hard functions is the scrypt paper, and analysis is clearly lacking because it focused only on mfgr costs. Colin Percival agreed with my opinion that it was short in that respect; but he said he didn't know how to estimate energy usage. Well, sadly, I don't really either except on ... |
04:17:26 | gmaxwell: | ... materialized designs... and the initial results of ltc scrypt vs sha256 don't bode well for the memory hard camp, but it's easy to dismiss the ltc scrypt numbers as 'not memory hard enough'. |
04:18:26 | zooko: | tromp: It is that there is a class of attacks which are prevented by the random graph being random. |
04:18:50 | zooko: | And if I understand correctly, the randomness of it is enforced by the hash function being a good PRF in the attacker's hands. |
04:19:22 | bramm: | gmaxwell, Yeah scrypt is hobbled by being a password hashing function rather than a proof of work function so the amount of memory it can require is fairly small |
04:20:58 | zooko: | tromp: When you and I last talked about this, you pointed out that real hash functions, e.g. BLAKE2, emit too much output, and there isn't a good way to use all of that output. |
04:21:07 | zooko: | But I later thought "Wait, that's no problem, just truncate." |
04:21:13 | tromp_: | zooko, the graph is still only pseudo random even with a blake2 hash |
04:21:30 | zooko: | Um, I don't think we're talking the same language yet. |
04:21:40 | zooko: | "pseudo random" is not a word in my lexicon. :-) |
04:21:46 | tromp_: | cuckoo aims to limit computation relative to memory accesses |
04:21:47 | bramm: | Would using a cryptographic hash function dramatically increase the amount of CPU needed or is it not a major factor? |
04:21:58 | zooko: | So, what I mean is, there are conceivable ways to break cuckoo, if you can choose your own graph structure. |
04:22:24 | zooko: | I think the security of cuckoo is much improved by the attacker not being able to choose that. |
04:22:24 | tromp_: | you cannot choose your own graph structure with siphash |
04:22:32 | zooko: | And I think SipHash was not designed to prevent that. |
04:22:52 | zooko: | It may indeed prevent it, but it hasn't been analyzed by cryptographers with an eye to whether it prevents it. |
04:22:57 | bramm: | Getting a handle on the size of this, I guess the size of a pow is the length of the seed, plus the lengths of specifying all of the nodes in the cycle, which is 42 times what? |
04:23:01 | zooko: | Contrasted with secure hash functions, which have. |
04:23:26 | tromp_: | there's no conceivable attack on siphash that lets you find large cycles in graphs on a billion nodes any easier |
04:23:46 | zooko: | That's not the sort of argument that cryptographers like. |
04:23:52 | bramm: | True, cycle finding is itself a form of cryptographic hashing |
04:23:53 | tromp_: | i know |
04:24:18 | tromp_: | it's a "gut" argument:( |
04:24:24 | zooko: | Cryptographers like the sort of argument that "any algorithm which would crack cuckoo would have to violate the pseudorandom property of the underlying hash function". |
04:24:44 | gmaxwell: | wrt miner power, e.g. Bitmain S3 is $199 retail for 453GH/s, and 355W at the wall. so energy and price are equal at about 117 days. Though this is distorted a bit by markup. On actual cost the numbers are much lower, and on parts sold in a competitive marketplace (e.g. gpus) you see numbers in 30-60 days range. In any case, the operating costs are considerable. So there are a couple of interesting avenues that come out of this: e.g. ... |
04:24:45 | tromp_: | yes, proof size would be 42*4 = 168 bytes |
04:24:50 | gmaxwell: | ... centralization advantages for amortized hardware; improvements in density (right now sha256 parts are largely thermally limited). I'd like to see more study in this area; otoh POW functions are some of the least interesting / superficial elements of these systems. |
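Plugging gmaxwell's S3 numbers into the same break-even arithmetic as the SP20 example above (assuming the same $0.20/kWh rate, since none is stated here):

    kwh_per_day = 355 / 1000 * 24                   # S3 draws 355 W at the wall
    print(round(199 / (kwh_per_day * 0.20)))        # ~117 days at retail price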
04:25:00 | bramm: | I guess if there are 2^32 nodes, that's 31 bits for each one because you have the bonus from it being bipartite? Minus a few more for compression cleverness? |
04:25:02 | tromp_: | more if the graph size exceeds 2^33 |
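The 168-byte figure is 42 edge indices at 4 bytes each; with 2^31 edges an index fits in 31 bits, so 4-byte nonces suffice until the graph grows well past this size:

    CYCLE_LEN = 42
    EDGES = 2 ** 31                                  # 2^32 nodes, half as many edges
    bits_per_nonce = (EDGES - 1).bit_length()        # 31 bits to index an edge
    proof_bytes = CYCLE_LEN * 4                      # stored as 4-byte words
    print(bits_per_nonce, proof_bytes)               # 31, 168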
04:25:37 | zooko: | The top-tier secure hash functions have been studied rather intensively by the symmetric-crypto crowd, because of the SHA-3 process, and any measurable deviation from random got published as a result for other cryptographers to study. |
04:25:42 | zooko: | That hasn't been done for SipHash. |
04:26:10 | zooko: | And in fact the remaining top tier ones, e.g. the SHA-3 finalists, are only the ones for which no deviation from random could be found, even in reduced-round subsets. |
04:26:12 | bramm: | zooko, the kinds of attacks you worry about in the design of cryptographic hash functions simply don't apply in this case |
04:26:58 | zooko: | bramm: We don't know that you can't break cuckoo PoW by manipulating the graph structure. |
04:26:59 | tromp_: | zooko, you can attack individual edges in a graph that has a billion |
04:27:12 | bramm: | tromp_, As a memory-lookup-saving measure, in addition to filtering out the nodes of degree one, couldn't you optimize out nodes of degree 2? |
04:27:13 | zooko: | Or otherwise taking advantage of some pattern or malleability of the hash function. |
04:27:33 | gmaxwell: | I like to point out that MD5 would be a fine pow... but I'm less clear on that argument in the case of the interior hash in the cuckoo cycle. Optimization freeness is important, and it may be that there is some interesting bias that can give you an exploitable optimization. |
04:27:43 | bramm: | zooko, The existence or not of cycles is itself a form of cryptographically secure hashing |
04:27:47 | tromp_: | bramm, nodes of degree 2 can be part of the cycle |
04:28:10 | zooko: | gmaxwell: Yeah, I'm concerned that SipHash has not been analyzed for the security properties that Cuckoo seems to need. |
04:28:16 | bramm: | tromp_, right but you can switch to making edges have length |
04:28:55 | gmaxwell: | zooko: hey, it's cryptocurrency, and particularly non-bitcoin cryptocurrency: I'm pretty sure that means that you're free from any expectation to evaluate the security of your construction. |
04:28:59 | gmaxwell: | :-/ |
04:29:02 | zooko: | gmaxwell: And also I think that BLAKE2 would be acceptably efficient as a replacement for SipHash in Cuckoo, and it has been so studied. |
04:29:03 | tromp_: | zooko: if it makes you happier, you can consider Cuckoo being parametrized on internal hash function, and substitute your own:) |
04:29:06 | bramm: | The only security property cuckoo needs is that you didn't manually put a cycle in there |
04:29:44 | zooko: | tromp: thanks! That does make me happier, and indeed what I was planning to do, but I want to suggest it to you, both to get other people who hear about it from your publications to study it, and in order to warn you that it may be a real problem. |
04:29:44 | bramm: | That said, it *might* be the case that even shoving in a cryptographic hash function doesn't make it CPU bound anyway |
04:29:47 | tromp_: | bramm: i really want edges to have length 1 |
04:29:50 | gmaxwell: | tromp_: but is it that simple, I mean, if you put a much slower internal hash function your progress freeness suffers. |
04:30:03 | zooko: | bramm: We don't know that there aren't other ways to break cuckoo than that way. |
04:30:26 | bramm: | tromp_, but it might save you half your memory lookups to allow edges to have length 2 or more and optimize out the edges of degree 2 as a first pass and put them back in at the end |
04:30:27 | zooko: | bramm: It definitely doesn't. BLAKE2 instead of SipHash-2-4 would probably only double the CPU cycle count. |
04:30:44 | tromp_: | yes, gmaxwell, it would clearly affect many cuckoo properties |
04:31:09 | zooko: | gmaxwell: if that turns out to be a problem then we'll probably use a reduced-round BLAKE2. |
04:31:20 | gmaxwell: | zooko: IIRC with the parameters tromp_ was talking about it was already somewhat edging on progress freeness problems. |
04:31:31 | tromp_: | i doubt it, zooko, considering blake2 has to output 256 bits?! |
04:31:44 | gmaxwell: | But I'm not sure how to 'measure' sufficient progress freeness, which makes a decision other than 'very progress free' harder to justify. |
04:32:02 | zooko: | tromp: no, just truncate the output. |
04:32:17 | bramm: | zooko, the problem cuckoo is based on rests much more directly on a fundamental problem in computer science than the core of secure hash functions does |
04:32:26 | bramm: | zooko, or use one output to define multiple edges :-P |
04:32:30 | zooko: | gmaxwell: hm, I had thought I could measure it by whether the concrete performance is "sufficiently" less than the block latency... |
04:32:44 | tromp_: | but you still need to compute many more (internal) bits, zooko |
04:32:47 | bramm: | And why do you want to use blake2 instead of sha3? |
04:32:48 | zooko: | bramm: yeah, tromp and I talked about that multiple-edges thing a bit, but I couldn't see a nice clean way to do it. |
04:32:52 | zooko: | That would be great, actually. |
04:33:34 | zooko: | bramm: I liked BLAKE more than I liked Keccak, so I collaborated on the definition of BLAKE2. |
04:33:41 | zooko: | Relevant to this usage, it is faster than Keccak in CPU. |
04:33:49 | gmaxwell: | bramm: that's a bit handwavy. You don't have to break the normal graph theoretic properties to have a very bad day as a proof of work. (e.g. an attacker discovering they can approximate your work function and get a substantial constant factor speedup, may not tell you anything about cuckoo graphs ... but it can ruin your security) |
04:33:50 | zooko: | Here is my rationale for BLAKE2: https://blake2.net/acns/slides.html |
04:34:05 | zooko: | Also relevant to this usage, BLAKE was studied a bit better than Keccak was studied in the SHA-3 process. |
04:34:12 | tromp_: | zooko, if you implement blake2 with 64bit ops, how many such ops do you need? |
04:34:21 | zooko: | gmaxwell: Now you're talking. :-) |
04:34:30 | bramm: | To define an edge you need 64 bits (well 62 but whatever) so 256 bits can define 4 edges straightforwardly |
04:34:31 | zooko: | tromp: you mean in CPU? |
04:34:51 | zooko: | * zooko looks at http://bench.cr.yp.to/results-hash.html |
04:34:58 | tromp_: | bramm, that's a very bad approach, as i explained to zooko before |
04:35:37 | bramm: | Presumably if you reduce the rounds of blake2 enough its security properties and cpu usage look a lot like sip except that it has a lot more bits of internal state |
04:35:46 | zooko: | I think it takes about 350 cpu cycles to run BLAKE2s on a 64-bit CPU. |
04:35:54 | zooko: | On a small input, I mean, less than one block. |
04:36:05 | zooko: | bramm: Yes, that's true. |
04:36:10 | zooko: | SipHash might be fine. :-/ |
04:36:33 | tromp_: | i even considered using siphash12 instead of siphash24 :) |
04:36:40 | zooko: | *nod* |
04:36:48 | tromp_: | it didnt affect the cycle length distribution |
04:36:50 | zooko: | I think it takes something like 140 CPU cycles to run SipHash-2-4. |
04:36:54 | zooko: | tromp, can you confirm that? |
04:37:00 | bramm: | tromp_, do you mean the multiple edges in an output thing is a bad idea or optimizing out edges of degree 2 is a bad idea? |
04:37:22 | zooko: | bramm: I suggested the same thing, BLAKE2 is generating 256 bits of output, and an edge needs only 64, so generate 4 edges. |
04:37:38 | tromp_: | using a 256 bit hash output to define 4 edges is a bad idea, bramm |
04:37:57 | bramm: | gmaxwell, The problem cuckoo is based on really is a coherent understandable problem. The problems which the core of cryptographically secure hash functions are based on amount to 'looks like it mashes it up real good and we haven't been able to break it' |
04:38:04 | tromp_: | because in the edge trimming phase you end up only using one of the 4 |
04:38:31 | zooko: | tromp: I don't understand that, now. |
04:38:38 | bramm: | Do you have to recalculate edges later, or are they all done up front? |
04:38:49 | tromp_: | did you read the section on edge trimming, zooko? |
04:39:05 | zooko: | I'll go look at the current draft now ... |
04:39:15 | tromp_: | it processes only the "alive" nonces, which becomes sparser and sparser |
04:39:29 | zooko: | Is this the right link? https://github.com/tromp/cuckoo/blob/master/cuckoo.pdf?raw=true |
04:39:38 | gmaxwell: | zooko: '"sufficiently" less than the block latency' I agree but I do not know how to quantify 'sufficiently', except in very broad numbers. E.g. functions whose execution is a small fraction of the latency skew between nodes on the network would be hard to argue as potentially problematic. (if they were, then latency between network nodes would be more problematic). Is 1 second okay against a 10 minute block? against a 10 minute ... |
04:39:39 | tromp_: | bramm, you generate edges many times |
04:39:45 | gmaxwell: | ... block a very fast centralized party could have a .16% advantage (e.g. if it took them no time to prove, and everyone else 1 second). Is that okay? I dunno. |
04:40:00 | tromp_: | bramm but in later rounds, you generate smaller and smaller subsets |
04:41:18 | zooko: | gmaxwell: Hm. |
04:41:50 | tromp_: | gmaxwell, i thought 10s attempts in a 10min block time was quite ok |
04:41:57 | tromp_: | which is already several % |
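The percentages here are simply attempt time divided by block interval, i.e. the head start a party who could prove instantly would have over everyone else. Checking both numbers mentioned:

    BLOCK_INTERVAL = 600            # seconds, for a 10-minute block
    for attempt in (1, 10):
        print(attempt, "s attempt:", round(100 * attempt / BLOCK_INTERVAL, 2), "% advantage")
    # 1 s  -> 0.17 %  (gmaxwell's .16% figure)
    # 10 s -> 1.67 %  (tromp_'s "several %")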
04:42:09 | gmaxwell: | bramm: You can keep repeating that, but the repetition won't enlighten me. The security/incentives considerations are much simpler than the 'underlying problem' (e.g. you do not and are not likely to find a strong reduction argument that the security and economics of the cuckoo work function in application as a proof of work reduce to any well studied problem) |
04:42:12 | tromp_: | so that's a price you have to pay with cuckoo |
04:42:33 | gmaxwell: | it's quite easy to construct POW functions based on well studied things which would be weak in practice, and people have done so many times on the forum. |
04:42:45 | zooko: | tromp: I really fail to understand this: |
04:42:48 | zooko: | “Next, for all alive edges, compute its even endpoint and increase the corresponding counter, capping the value at 2. |
04:42:48 | zooko: | Next, for all alive edges, compute its even endpoint and if the corresponding counter is less than 2, set the edge to be not-alive.” |
04:43:22 | tromp_: | zooko: i'm identifying leaf nodes among the current set of live edges |
04:44:01 | tromp_: | so i compute degrees, and check for degrees equal to 1 |
04:44:03 | gmaxwell: | E.g. there was one proposal that was to generate randomized boolean satisfiability... arguing that it was 'secure' because 3-sat is NP-complete. But ... most sat problem instances are not hard, and you could simply grind the instance generator until you got a problem that a heuristic solver could immediately answer. |
04:44:18 | tromp_: | "less than 2" might be confusing since it will be at least 1:( |
04:46:03 | zooko: | tromp: ah |
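A compact sketch of the trimming round tromp_ is describing: count live edges per node on one side (capping the counter at 2) and kill edges whose endpoint is a leaf, since a degree-1 node can never lie on a cycle. This is a plain-dictionary illustration of the idea, not the paper's memory-efficient implementation:

    def trim_round(live_edges, side=0):
        """live_edges: list of (u, v) pairs. Drop edges whose `side` endpoint
        is touched by only one live edge (a leaf node)."""
        degree = {}
        for e in live_edges:
            node = e[side]
            degree[node] = min(degree.get(node, 0) + 1, 2)   # cap the counter at 2
        return [e for e in live_edges if degree[e[side]] >= 2]

    edges = [(0, 5), (0, 6), (1, 6), (2, 7)]
    print(trim_round(edges, side=0))       # the edges touching nodes 1 and 2 are trimmed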
04:47:03 | tromp_: | bramm: i dont understand what you meant with "optimizing out edges of degree 2" |
04:47:07 | zooko: | gmaxwell: it reminds me of the tradition of insecure public key cryptosystems based on classic hard problems. |
04:47:11 | bramm: | gmaxwell, As someone who's a bit of an expert in boolean satisfiability, I can tell you that random satisfiability is an absolutely horrible problem, but clique and related (for example, cycle finding) are quite good |
04:47:40 | bramm: | tromp_, I think I need to digest the core algorithm before I know if I'm talking sense on that one |
04:47:42 | zooko: | What classic problem is cuckoo most like, and how does it differ? |
04:47:57 | tromp_: | my intuition (and experience as a theoretical computer scientist) agrees with bramm's |
04:48:14 | bramm: | Trust us, we're scientists :-) |
04:48:34 | gmaxwell: | Funny, because thats not what you sound like here. |
04:48:35 | bramm: | It's funny how little crossover there is between programmers and theoretical computer scientists |
04:49:31 | bramm: | gmaxwell, that's because you're judging based on your prejudices of how you expect scientists to talk |
04:49:41 | tromp_: | if i don't sound like one, it's because i cannot prove any security properties of cuckoo |
04:49:52 | gmaxwell: | Because how tidy a particular problem is ... is more or less uncorrelated with the security of a particular algorithm based on it in a particular environment. Especially when the notion of security hinges heavily on economic arguments and can be broken thoroughly by small constant factor considerations. |
04:50:25 | bramm: | gmaxwell, certain problems tend to have similar average case and worst case runtimes, clique-like things are most definitely one of those |
04:51:11 | bramm: | Also there are ways of avoiding fixing the constants by allowing several different ones and counting them as different amounts of work |
04:51:33 | tromp_: | i only want to do that with graph size |
04:51:45 | tromp_: | see section on dynamic sizing |
04:51:51 | gmaxwell: | bramm: No, actually I'm saying that based on your continued disregard for the substantial expertise built up over years by people experienced in this space; and by your arguments which form a pattern similar to that used to justify many broken cryptographic security arguments in the past. |
04:52:51 | bramm: | gmaxwell, I'm not disregarding anyone's expertise, I just hadn't had a conversation with anyone who could tell me about the current state of the art of any of this stuff until today |
04:53:38 | bramm: | It's not like I haven't made multiple posts about my thoughts asking for input. I've gotten diddly for response |
04:53:49 | gmaxwell: | Saying cycle finding is a good problem is one thing; that does not actually end the discussion for it making a good POW in practice. Maybe it does, maybe it doesn't; you can build a good POW out of tasks that aren't 'good' too. |
04:54:39 | gmaxwell: | bramm: well the only post I saw of yours about cryptocurrency stuff prior to today was a remarkably poorly researched "vulnerability" in bitcoin that didn't exist; and which would have been quickly pointed out to you on bitcoin talk though you dismiss it as being full of fools. :) |
04:55:05 | zooko: | Something that would be great for ending certain discussions would be a reduction (also known as a security proof) showing that if you can crack cuckoo then you can use the cuckoo-cracker to find cycles in random graphs. |
04:55:28 | bramm: | gmaxwell, In that post I directly say that it might already be in there, I was posting it mostly as part of my process analyzing such things and to explain the reasoning behind it to people who don't know. Nothing I said in that post is wrong |
04:55:31 | phantomcircuit: | gmaxwell, being fair, it *is* full of fools |
04:55:38 | gmaxwell: | zooko: well reduction arguments ... also have known limitations. I'm not really sure that one would help here (though it would be nice to have) |
04:56:20 | tromp_: | zooko, it would also be great if you could prove that any amount of memory was needed to solve cuckoo with any particular efficiency. but i don't expect we'll ever be able to prove such things |
04:56:38 | bramm: | gmaxwell, You are far more knowledgeable and sensible than the vast majority of people I've spoken to who claim to be bitcoin experts (and are really fanboys) although you can be a bit of a jerk :-P |
04:56:59 | gmaxwell: | bramm: Your post wasted about an hour of my time in aggregate dealing with people who were concerned about the 'flaw' you found in bitcoin. So, no offense, but I think I'll reach my own conclusions about how wrong your post was. |
04:57:36 | gmaxwell: | Sorry if I've been too rude. uh. I had the notion in my head that you were not likely to take offense. I'll take more care. I do not wish to offend you. |
04:57:47 | tromp_: | cuckoo is the sort of pow that can at best get a reputation of hardness from prolonged widespread use and failure to find big efficiency gains |
04:58:15 | bramm: | gmaxwell, Sorry about that, I thought saying that the problem might already be fixed in the very first sentence and giving exactly the trivial fix which is already in there at the end and explaining just how easy it would be to stop the attack if it wasn't already fixed in the body of the post would stop people from flying off half-cocked |
04:58:32 | tromp_: | somewhat like crypto primitives gaining a reputation of being secure |
04:59:02 | gmaxwell: | tromp_: Well I'm not sure if there is really a better approach than that. But it's not great, in that no one is required to publish their optimizations. |
04:59:28 | gmaxwell: | There exist non-trivial unpublished optimizations to bitcoin's work function. |
04:59:45 | tromp_: | yes, just like no one is required to publish their RSA and SHA256 breaks:) |
04:59:49 | bramm: | gmaxwell, I don't take offense easily, but a few times I've had to check myself from getting mad at you |
05:00:21 | bramm: | If you ask people in crypto they're actually much more skeeved by RSA than lots of other things |
05:00:35 | bramm: | Mostly because its encoding issues are such a nightmare |
05:01:23 | gmaxwell: | bramm: ha. sorry, I'll turn it down a notch then. I'm certainly glad to have you around here in any case... and indeed most bitcoin "experts" are really just enthusiasts... one of the downsides of how surprisingly accessible the basics of the technology here are. |
05:01:49 | gmaxwell: | You mean to suggest someone might create a vulnerability in their RSA padding scheme?!? |
05:01:59 | tromp_: | gmaxwell: it seems that for a pow to get some notion of provable security, it has to be related to random oracles such as scrypt did |
05:02:18 | phantomcircuit: | gmaxwell, ha |
05:02:25 | bramm: | But yes, it would be nice to have a reduction for the PoW in cuckoo, but there are some good theoretical reasons to trust max-clique, I'll dig some up now |
05:02:41 | zooko: | One thing that I think is under-appreciated about security reductions is that they can be valuable *components* of a larger picture. |
05:02:46 | gmaxwell: | tromp_: academic cryptographers don't give RO enough love these days. A statement about POW security in the RO model is interesting. |
05:03:02 | gmaxwell: | (regardless of what academic cryptographers think) |
05:03:08 | zooko: | I think that the phrase "provable security" contributes to overlooking that, by implicitly focusing on the idea that you can prove the whole thing. |
05:03:23 | bramm: | gmaxwell, I'm guessing you're being sarcastic about RSA padding. That high order byte freaking sucks. |
05:03:37 | zooko: | For example, I'd like to understand more precisely what relationship there is, if any, between a well-studied problem like cycle-finding in random graphs, and cuckoo. |
05:03:45 | bramm: | What are you using RO as an acronym for? |
05:04:00 | gmaxwell: | zooko: yea, really the words "provable security" should be banned. it's almost a joke how many systems with security proofs are vulnerable, ... usually because to get a security proof something stupid needs to be done that makes the proof wrong or worthless... but taken as what they are worth and no more they can be useful. |
05:04:02 | tromp_: | random oracle |
05:04:15 | bramm: | zooko, all that cuckoo is doing is finding cycles in random graphs |
05:05:02 | zooko: | bramm: So, inasmuch as that is true, then we ought to be able to write a reduction (don't call it a "security proof") in which we take as input a cuckoo-pow-cracker, and we emit as output a cycle-finder-in-random-graphs. |
05:05:11 | zooko: | But I don't think it is precisely true. |
05:05:38 | zooko: | For starters, with that thing that I have been bugging tromp about for a while now, that cuckoo's graphs are not necessarily random. |
05:05:38 | phantomcircuit: | gmaxwell, the security of most crypto systems of course being based on things which are believed to be difficult, but for which nobody has actually proven them to be difficult |
05:05:42 | op_null: | gmaxwell: by "unpublished" do you mean some people know about them, or that they have made it into hardware? I'm not interested in what they are as you're obviously not shouting them out. |
05:06:13 | gmaxwell: | bramm: yes.. But really bad crypto is easy, even in a nice cofactor-1 EC group like ours; see e.g. the first 'message encryption' thing electrum had. It didn't quite manage to leak the secret key, but decryption acted as an oracle to decrypt other messages... and there were even some messages which its encryption process would corrupt without detection even absent an attacker. |
05:06:14 | zooko: | So, if I have a cuckoo-pow-cracker that works by exploiting patterns in siphash, then you can't use that to find cycles in random graphs, therefore there can be no such security reduction as the one I wished for above. |
05:06:15 | bramm: | zooko, not quite, no, but the algorithms for finding cliques are remarkably weak - there's a trivial approach, and then it hits a hard impassable wall which there simply aren't any optimizations on |
05:07:06 | zooko: | So now we have to expand our security reduction a little, and say, okay here's a machine that eats cuckoo-pow-crackers and shits out *either* cycle-finders-in-random-graphs *or* siphash-pattern-finders. |
05:07:20 | zooko: | See how security reductions can actually be useful tools for cognition and argument? |
05:07:26 | zooko: | Not that I've ever actually written one. |
05:07:57 | gmaxwell: | op_null: Sometimes I'm not clear on what I came up with, and what someone told to me, and what someone told to me in confidence... so the only safe thing to do is not comment. As I don't want to discourage people from telling me things by blabbing about them. But yes, there are optimizations which some people know about which AFAICT are not widely publically known. I dunno if any hardware is actually using them yet. |
05:08:07 | zooko: | Before I get too tired, let me just throw out the specific angle of attack that I came up with in my kitchen with Nathan Wilcox and amiller last week... |
05:09:23 | bramm: | zooko, that may be doable. My intuition boils down to saying that the patterns found in sip hash would have to be extraordinarily powerful, far beyond the security sip hash is assumed to have |
05:09:50 | bramm: | Although I don't know how one would do any such reduction, for the same reasons that cycle finding is hard, unfortunately |
05:09:51 | op_null: | gmaxwell: thanks, that's literally all I wanted to know. |
05:10:09 | zooko: | I've refrained from mentioning it until now, because I wanted to focus on the general class of possible issues, and I didn't want my *general* objection to be overlooked if this *particular* |
05:10:14 | zooko: | angle of attack doesn't pan out. |
05:11:20 | zooko: | So, my vague general idea is: I'm going to figure out how to generate candidate graphs in cuckoo which are related to one another in such a way that my work on one of them can benefit me on the next one. |
05:11:44 | zooko: | Possibly sequentially, violating progress-free-ness, or possibly in parallel, allowing me to get > O(N) cuckoo-pow-work out of O(N) RAM. |
05:12:17 | gmaxwell: | zooko: but see, even that much isn't enough for security in a POW context. Say you can run one operation of the cuckoo finder and tell with 0.1% precision whether you will find a solution, 100x earlier than normal. Such a black box would not break your normal reduction arguments, but it would allow you to grind instances of the problem and get a speedup. |
05:12:21 | zooko: | For example, if I generate many graphs which share some common subgraphs, then when I do the pruning, or other work, on that common subgraph, that might help me make progress on all the graphs. |
05:13:02 | zooko: | gmaxwell: um, doesn't that just mean the tightness of the reduction matters in practice for us? |
05:14:11 | bramm: | zooko, that wouldn't really help you, even being able to recalculate on half the edges in advance wouldn't speed things up all that much |
05:14:13 | zooko: | Well, it has been a very nice evening of -wizards. Thanks. Must sleep now. Took melatonin pill an hour ago and it is kicking in. |
05:14:19 | tromp_: | identifying common subgraphs is more work than just looking for the cycle:( |
05:14:45 | tromp_: | unless the cycle is the common subgraph?! |
05:14:47 | gmaxwell: | zooko: well normally you ignore constant factors and it may be only a constant factor difference, but a constant factor can be all you need to make it so only the person knowing the optimization can mine economically. |
05:14:49 | zooko: | No, no, silly, not *identifying* common subgraphs. |
05:14:54 | zooko: | Generating ... |
05:15:16 | zooko: | Assume, counterfactually, for a moment, that cuckoo-pow lets you pick any graph you want for your "random" graph. |
05:15:31 | zooko: | Then, you would just pick one that had a cycle in it that you knew to look for, right? |
05:15:38 | zooko: | Okay, that was useless. Let me try again. |
05:16:08 | gmaxwell: | A more concrete example of that ... There was a memory hard POW that I broke (the first one from the protoshares people) which had some large 128MB state that it filled and updated as it ran. But I observed that it only very rarely read the updated states during its execution. The initialization could be completed in parallel... so it was trivial to make an approximation of it that was just as fast and used no memory at all.. but it ... |
05:16:14 | gmaxwell: | ... was wrong about 1% of the time. You could have had a nice security reduction saying there was no way to compute the function without the state or whatever.. and yet for POW purposes it was fine to substitute it with an approximation. |
05:16:17 | zooko: | Assume that you can generate graphs with certain control over them, but not enough control to just force a specific cycle. |
05:16:19 | gmaxwell: | sorry for the overlapping conversation. |
05:16:22 | zooko: | But, let's say you can for example |
05:16:38 | bramm: | zooko, the problem is that there are zillions of chains of length 21 everywhere, but their structure is so complicated that they can't be summarized, and only one is likely to lead to a cycle of length 42 |
05:16:48 | zooko: | have *compression* between many graphs, because of their common structure. |
05:17:10 | zooko: | You can then use your RAM to perform many cuckoo-pow's in parallel, effectively re-using the same RAM to work on multiple graphs at once, because of the inter-graph compression. |
05:17:18 | zooko: | sleep is taking me... |
05:17:36 | bramm: | Yeah, like i said, unlikely to help |
05:17:39 | zooko: | gmaxwell: what you said sounded super relevant and interesting but I was reading the other conversation and now I'm too tired. |
05:18:01 | gmaxwell: | zooko: we'll talk later. |
05:18:11 | tromp_: | it's getting too late for me too |
05:18:13 | op_null: | gmaxwell: wish I had the resources to decap chips. I'd love to be able to get some pictures of all the different dies, from the huge 110nm asicminer chips to the tiny tiny spondoolies ones. |
05:18:16 | tromp_: | g'night folks |
05:18:29 | amiller: | gmaxwell, failing to account for approximations is a sign of a shitty pow formalism.. |
05:19:05 | op_null: | actually decapping them probably isn't the problem, that's acid and heat. getting decent pictures of the silicon would be the bit I can't do with any level of quality. |
05:19:22 | bramm: | Night Tromp... wait, are you in the USA now? |
05:19:32 | tromp_: | yes, since 2007 |
05:19:45 | bramm: | Ah, I didn't know that |
05:19:46 | zooko: | sweet come over to my house. |
05:20:12 | tromp_: | if i'm ever in the neighbourhood... |
05:20:13 | amiller: | nick szabo has a really nice essay in favor of more precise reductions than "polynomial time", i think the only reason that's not in common use is that the general equivalence of "reasonable" computing machines is only that coarse, hence anything more precise has to be tuned to particular computing systems like circuits or turing machines |
05:20:42 | andytoshi: | today may have been the most productive and complete discussion of pow we've ever had here in one go |
05:20:42 | amiller: | which is actually really relevant to pow, because we are interested in preventing asics given some kinds of assumptions about the design space of those |
05:21:03 | bramm: | amiller, the polynomial hierarchy has been studied, but particular exponents are of course nowhere near as robust a class as polytime itself |
05:21:53 | bramm: | andytoshi, Yeah it seems like the conversation actually got somewhere |
05:22:11 | kanzure: | op_null: there's a few people doing chip decapping in this community's fringes |
05:22:24 | amiller: | bramm, well, welcome :) dunno what took you so long to get here |
05:22:27 | kanzure: | op_null: a few people have open offers if you send them chips and some peroxide they will happily run it under a SEM for you |
05:22:46 | bramm: | amiller, I wasn't working on anything related |
05:22:51 | andytoshi: | bramm: a big part of it was that when things got heated everyone acknowledged and calmed down .. it doesn't happen too often, but tempers flare up here and sometimes it's just a shitshow |
05:22:58 | andytoshi: | oh and let me add my voice to the "welcome"s |
05:23:03 | kanzure: | op_null: http://siliconpr0n.org/archive/doku.php?id=digshadow |
05:23:09 | op_null: | kanzure: heh, sending nitric acid in the mail seems like a good way to have people knocking on my door and telling me kindly to stop |
05:23:11 | gmaxwell: | amiller: sure, and then you run into e.g. DJB's rant that the quantum collision-finding algorithms are not really any faster than the classical ones. (they claim to be faster but they're not counting hardware parallelism or communications costs)... All algorithms that complete within the life of the universe are O(1) if your computing devices are extra universes. :P |
05:23:40 | op_null: | kanzure: ooo. |
05:23:49 | bramm: | I tend to avoid working on things where no clear promises can be made: like, I move data from point A to point B because you can strongly say that the data was in fact moved, but I don't like claiming that a secret has been kept or data is stored because those are such fuzzy promises into the indefinite future |
05:25:10 | bramm: | Likewise I view claiming that a bitcoin is worth something as a completely ludicrous claim to make, but I do believe that bitcoin can strongly make the claim that a particular transaction has been executed in an extremely distributed database without a double spend, so I find that an interesting problem to work on |
05:25:47 | op_null: | kanzure: I'm pretty tempted to remove a few from the end of a bitfury chain and see if anybody wants to decap them for me. I only have a couple of types though sadly. asicminer, gridseed and bitfury/antminer (same thing). |
05:25:49 | kanzure: | instead of promising data storage you can offer locked/microchannel-style funds for correct data transmission in the future or something (i agree that it is hard or wrong to try to prove data storage) |
05:26:01 | bramm: | But you'll never hear me cheering increases in the value of a crypto currency or freaking out about its value falling, because I don't view that as important or interesting |
05:26:04 | andytoshi: | i'm not sure anyone here can justify bitcoin's value, but i think the double-spend protection depends on it ;) |
05:26:24 | andytoshi: | because the idea is that the "real" history is the only one whose coins have value |
05:26:31 | kanzure: | op_null: email me with a short proposal and i can send it along (kanzure@gmail.com) |
05:26:48 | bramm: | andytoshi, you mean bitcoin's value depends on not having double spending, yes that's most definitely true |
05:27:17 | andytoshi: | bramm: hmmm, i meant its not having double-spending depends on its value, otherwise there is no incentive for miners to agree on a canonical history |
05:27:23 | bramm: | Avoiding double spending is in fact the only claim which bitcoin's core actually claims to make |
05:27:32 | andytoshi: | but yeah, there is an opposite implication. weird |
05:27:41 | op_null: | kanzure: I'll have a shot with someone locally first, if they can't I'll shoot you an email about it. |
05:28:00 | bramm: | It does appear that a bitcoin-style system critically depends on having mining, unfortunately |
05:28:44 | phantomcircuit: | bramm, i might have missed it, but do you have a different system to prevent double spends? |
05:28:45 | moa: | it's not unfortunate it seems inevitable |
05:28:46 | andytoshi: | yeah, seems that way.. |
05:29:06 | gmaxwell: | bramm: nothing in the bitcoin system assigns any value to the coins ... the users of it may (and, seemingly, do) but that's largely outside of the scope of the system itself. (and also an area I consider fairly boring, filled with hype and noise) |
05:29:37 | bramm: | phantomcircuit, no I don't, at least not one which is as anywhere near as distributed and robust |
05:29:47 | bramm: | gmaxwell, On that point we're in agreement |
05:29:48 | andytoshi: | but fwiw there is no inherent reason that we have to use the thermodynamic limit (eg the stuff we do understand is totally compatible with memory-hard pow, which is using the universe's bandwidth limit) |
05:30:04 | andytoshi: | i share skepticism on the economics, but don't have anything to add to what's been said there |
05:30:08 | moa: | physics |
05:30:15 | gmaxwell: | We normally direct people who want to talk about market prices to other channels; there are some cases where the fact that its considered valueable at all matters though; since it's somewhat essential to an economically motivated security argument. |
05:30:59 | phantomcircuit: | gmaxwell, somewhat? :P |
05:31:04 | bramm: | Yes but the attachment is very weak: If it was worth less then less work would be done |
05:31:18 | moa: | it's that damned messy real world imposing reality all the time ... |
05:32:03 | gmaxwell: | phantomcircuit: well you can still argue bitcoin is secure completely without value; if you argue it in a model where you take as an axiom that the majority of hashpower is 'honest' (conforms with the system's rules). |
05:32:38 | gmaxwell: | now, how you'd justify that axiom is another question. But then again, lots of things happen that seem to have little justification beyond inertia. :P |
05:32:48 | phantomcircuit: | heh |
05:33:07 | phantomcircuit: | it would be interesting if the price drops sub $200 to see what happens |
05:33:21 | phantomcircuit: | (that is ~ the break even point for most miners at $0.10/kWh) |
05:33:23 | bramm: | Even if you have a majority of the hash power your potential attacks are fairly weak - basically you can retroactively reject transactions, but you can't change them |
05:33:30 | op_null: | the latency in mining makes that harder to see the effects |
05:34:08 | bramm: | That's my big ongoing beef with side chains - an attacker with enough power can not only reject a release but redirect it, but that's a whole long discussion |
05:34:09 | op_null: | ie, people might mine at a loss rather than breaking their operation down and just hoping for it to rise back up. a sudden increase in price has a high latency where you can't spin up more hardware easily. |
05:34:11 | gmaxwell: | bramm: up to a limit, past 100 blocks reorged and you can make new coins yours and not other people's and that effectively replaces transactions. |
05:34:40 | bramm: | gmaxwell, I don't view retroactively changing mining as a particularly meaningful attack |
05:34:41 | kanzure: | you can change your own transactions |
05:35:01 | kanzure: | (in deep reorgs) |
05:35:02 | gmaxwell: | bramm: really? at the limit it changes the ownership of every single coin. |
05:35:07 | kanzure: | (in deep reorgs that you generate) |
05:35:15 | op_null: | phantomcircuit: where does your farm sit with regard to clocks? do you underclock, overclock, hardware dependant? |
05:35:21 | kanzure: | (in deep reorgs that you generate that change transactions you sent earlier) |
05:35:41 | phantomcircuit: | op_null, any hardware which can be overclocked is not properly spec'd |
05:35:54 | moa: | how many full nodes on the network at any one time? ... still around 7500? |
05:35:55 | bramm: | Probably the strongest attack is that you could double-spend by spending a coin, undoing the transaction, and then re-spending it. You can't fraudulently change other peoples's transactions |
05:36:12 | kanzure: | yes you can |
05:36:12 | kanzure: | wtf man |
05:36:16 | gmaxwell: | bramm: likewise SPV clients (which constitute the majority of end users' wallets) will also happily be lied to about you spending other people's coins, or coins that never existed. But indeed, it's not precisely the same security model as a plain full node (well ignoring the deep reorgs). |
05:36:19 | phantomcircuit: | op_null, with a lot of systems underclocking allows for only slight improvements to efficiency |
05:36:22 | kanzure: | you can totally mutate various transactions like crazy |
05:36:45 | kanzure: | especially if you are interested in invalidating large trees of related transactions |
05:37:07 | bramm: | gmaxwell, I view changing transactions as a problem. People not getting the returns they hoped for on mining, for whatever reason, is their problem |
05:37:19 | op_null: | phantomcircuit: I doubt you run any, but underclocking bitmain/bitfury hardware is deeply impressive. they seem to have run them pretty hard at stock, when you bring them down they run basically cold and at ridiculous efficiency. |
05:37:36 | bramm: | kanzure, How can you do that without being able to forge signatures? Aside from being able to reject transactions I mean |
05:37:41 | gmaxwell: | bramm: oh but it's not only those people I'm saying lose their coins; though they do too... it's also every descendant transaction after them. |
05:37:48 | gmaxwell: | And _every_ coin was ultimately mined. |
05:37:49 | kanzure: | bramm: signatures don't actually sign for the entire transaction, surprise :( |
05:37:49 | moa: | how practical would it be to spin up ~10,000 malicious nodes that intentionally avoid connecting to each other but attempt to isolate, block TX and corrupt comms between the 'honest' nodes? |
05:38:05 | bramm: | gmaxwell, I haven't really thought much about spv clients, aside from wishing that the merkle roots were there for them to operate off of |
05:38:22 | phantomcircuit: | op_null, iirc that's only because they dont have effective dynamic voltage control |
05:38:27 | gmaxwell: | bramm: kanzure is talking about transaction malleability. |
05:38:29 | andytoshi: | bramm: https://github.com/bitcoin/bips/blob/master/bip-0062.mediawiki has a list of ways you can change txes without changing the signature |
05:39:00 | andytoshi: | bramm: note that you can't redirect coins this way (except if the transactions were deliberately crafted to allow this) but you can break chains of transactions |
05:39:16 | phantomcircuit: | op_null, minimum vdd changes with die temperature, which itself fluctuates |
05:39:19 | kanzure: | (transaction malleability has been at the top of my mind today because i have been implementing relevant software that has to handle deep reorgs and mutated transactions) |
05:39:30 | gmaxwell: | bramm: also wrt your wish, what you're thinking of there may be less of an improvement than you think it is, simply because the privacy model of bitcoin strongly assumes that your wallet is private to you. The blockchain itself does not know which mixes of addresses share a wallet. |
05:39:37 | bramm: | gmaxwell, If you had substantially more than 50% and wanted to really fuck shit up and take the time to do it, yeah, a lot of problems come up. It isn't a fine white line at 50% though, some problems start below that and don't become severe until well above |
05:39:43 | phantomcircuit: | i would expect a lot of equipment to be turned off before it was underclocked/undervolted |
05:39:52 | kanzure: | ( https://en.bitcoin.it/wiki/Transaction_Malleability ) |
05:40:12 | op_null: | phantomcircuit: they don't have any at all I don't think. |
05:40:12 | bramm: | gmaxwell, I view privacy as a largely accidental feature of bitcoin |
05:40:22 | gmaxwell: | bramm: see also the many forum posts where I make exactly that point only to be mobbed by fanboys who obsess over "50%" as a magical value. :) |
05:40:25 | kanzure: | "While transactions are signed, the signature does not currently cover all the data in a transaction that is hashed to create the transaction hash. Thus while uncommon it is possible for a node on the network to change a transaction you send in such a way that the hash is invalidated. Note that this just changes the hash, the output of the transaction remains the same and the bitcoins will go to their intended recipient. However this does ... |
05:40:31 | kanzure: | ... mean that, for instance, it is not safe to accept a chain of unconfirmed transactions under any circumstance because the later transactions will depend on the hashes of the previous transactions, and those hashes can be changed until they are confirmed in a block. (and potentially even after a confirmation if the block chain is reorganized) In addition clients must always actively scan for transactions to them; assuming a txout exists ... |
05:40:37 | kanzure: | ... because the client created it previously is unsafe." |
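A minimal Python sketch of the point kanzure quotes above: the transaction hash commits to the scriptSig bytes, but the signature itself does not cover them, so a relaying node can re-encode a signature and change the txid while the payment outputs stay intact. The serialization below is a toy stand-in, not Bitcoin's real wire format.

    import hashlib

    def sha256d(b: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def toy_txid(script_sigs, outputs) -> str:
        # Toy serialization: scriptSig bytes followed by (value, scriptPubKey) pairs.
        blob = b"".join(script_sigs) + b"".join(
            value.to_bytes(8, "little") + spk for value, spk in outputs
        )
        return sha256d(blob)[::-1].hex()

    outputs = [(50_000, b"\x76\xa9...recipient...")]            # identical either way
    original = toy_txid([b"\x30\x45...sig-encoding-A..."], outputs)
    mutated  = toy_txid([b"\x30\x46...sig-encoding-B..."], outputs)  # same spend, re-encoded sig
    print(original != mutated)   # True: same payment, different txid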
05:40:39 | op_null: | bramm: there is almost no privacy in Bitcoin at all. |
05:41:03 | phantomcircuit: | op_null, right |
05:41:31 | phantomcircuit: | op_null, it's my understanding that you have to manually undervolt them when you underclock them |
05:41:38 | phantomcircuit: | i dont see that being done widely |
05:41:39 | gmaxwell: | bramm: Section 10 of bitcoin.pdf ... Not that it's great, but its stronger than you may credit it for, and the performance optimizations are largely only available if it's broken completely. Doesn't seem like a good tradeoff considering how fast spv clients already are. |
05:41:54 | op_null: | phantomcircuit: that's right. |
05:42:03 | bramm: | andytoshi, I'm afraid that reading that link will give me an aneurysm |
05:42:21 | kanzure: | maintaining multiple simultaneous aneurysms is the name of the game |
05:42:37 | kanzure: | my brain tumor receives #bitcoin-wizards all the time |
05:43:03 | bramm: | (I have opened it in a new tab though, to look at later) |
05:43:04 | gmaxwell: | kanzure: in any case, I don't think that the malleability is that great an example in this case because it's primarily a DOS attack. I really do think that pointing out that redirecting coinbases ultimately redirects all the coins, not just miners' coins, is a stronger example of how badly a high hashpower attacker can break things. |
05:43:23 | kanzure: | fair enough |
05:43:32 | kanzure: | yeah i am over-compensating for the other cases |
05:43:40 | kanzure: | because i have had to deal with them more recently |
05:43:59 | op_null: | the more you look into it really the less private Bitcoin transactions are. they are only private in that people don't bother to look at the transactions. people leak a lot about their addresses, transactions. |
05:44:09 | op_null: | if a motivated individual spent some time doing some aggressive scraping of Bitcoin related content they would rapidly gain a picture of a large number of address ownerships. |
05:44:22 | bramm: | Yes it's the later transactions getting undone which is the real problem, not the initial mining being undone |
05:44:41 | gmaxwell: | Some of the malleability is an intentional and useful feature, e.g. that you can author transactions for multiple people to supply input coins to non-interactively. ... some of it is a design bug. Though mostly it results in DOS attacks against conventional uses at worst. Unconventional uses are another matter. |
05:44:57 | andytoshi: | bramm: :P i almost posted one about anonymity/privacy, then remembered how much shit we've thrown at you all day |
05:45:16 | op_null: | a website like blockchain.info has even more. they have the wallet information of every user, and the information of all the people who search for their own transactions and addresses to see how they are going. they no doubt know who owns almost all the BTC in the network, just from their logs. |
05:45:21 | bramm: | There was a colloquium at Stanford the other day on figuring out the IP address of someone who did a Bitcoin transaction. It's remarkably easy, and attempting to stop attackers from being able to DOS peers by hogging all their peer slots makes the lack of anonymity problem worse |
05:45:56 | gmaxwell: | bramm: right, yea, I don't give a crap about miners in that context, they can look out for themselves. But if their coins are lost, all descendant coins are lost. I wasted an unfortunate amount of time trying to figure out how to socialize double spending losses (e.g. by turning them into inflation) but never was able to come up with a scheme where it wasn't just an instant incentive for miners to form themselves. :) |
05:46:01 | kanzure: | bramm: that's why you should just give your recipient the raw signed bitcoin transaction itself |
05:46:35 | op_null: | bramm: that sounds like mostly bullshit to me. the layout of the network doesn't permit that sort of thing happening. |
05:47:09 | bramm: | op_null, Peers announcing who they're talking to via gossip leaks a LOT of information |
05:47:30 | op_null: | bramm: that doesn't tell you where a transaction has come from though. |
05:47:45 | kanzure: | if you occupy all of their peer slots, you know when they send you something that you never sent them |
05:48:07 | bramm: | This particular work was by Ivan Pustogarov |
05:48:15 | gmaxwell: | bramm: sometimes research suffers from not knowing what high-security users are doing in practice. (in particular, high-security applications of bitcoin do not announce transactions from persistently-online nodes.) |
05:48:16 | bramm: | op_null, it tells you who introduced it to the network |
05:48:43 | op_null: | bramm: only if you own every one of their connections, which is extremely unlikely. |
05:48:55 | kanzure: | no you don't need to be every single one |
05:49:00 | kanzure: | (welcome to stats) |
05:49:03 | bramm: | gmaxwell, Well yeah there are simple countermeasures to that particular one, but the point is that there's a lot more information leaking than most people might think |
05:49:19 | gmaxwell: | op_null: you should read the paper, if you haven't. (there is a paper on this) |
05:49:39 | bramm: | op_null, You don't have to know all their connections, you just see who announces to you that they have a transaction first and triangulate who they're all connected to |
05:49:44 | kanzure: | link? |
05:49:49 | op_null: | ditto. |
05:50:13 | op_null: | I saw one a while back which was mostly nonsense, by the sounds of it this is something different. |
05:50:17 | bramm: | https://crypto.stanford.edu/seclab/sem-14-15/pustogarov.html |
05:50:34 | bramm: | The really terrifying thing is how easy it is to bust Tor |
05:50:38 | kanzure: | http://diyhpl.us/~bryan/papers2/bitcoin/Deanonymisation%20of%20clients%20in%20Bitcoin%20P2P%20network.pdf |
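A rough sketch of the "first announcer" idea behind this line of work, assuming an observer connected to many listening nodes logs inv announcements: the peer that is repeatedly first to announce a wallet's transactions is a good guess for its entry point. The log data below is invented for illustration.

    from collections import Counter

    def first_announcers(inv_log):
        """inv_log: iterable of (txid, peer_ip, timestamp)."""
        first = {}
        for txid, peer, ts in inv_log:
            if txid not in first or ts < first[txid][1]:
                first[txid] = (peer, ts)
        return {txid: peer for txid, (peer, _) in first.items()}

    def likely_origin(inv_log, wallet_txids):
        # Which peer was most often the first to announce this wallet's transactions?
        counts = Counter(peer for txid, peer in first_announcers(inv_log).items()
                         if txid in wallet_txids)
        return counts.most_common(1)[0] if counts else None

    inv_log = [("tx1", "10.0.0.5", 1.00), ("tx1", "10.0.0.9", 1.20),
               ("tx2", "10.0.0.5", 5.10), ("tx2", "10.0.0.7", 5.40)]
    print(likely_origin(inv_log, {"tx1", "tx2"}))   # ('10.0.0.5', 2)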
05:52:08 | bramm: | Although of course everybody involved with core Tor development knew from the beginning how inherently fragile the whole thing is. It was always a 'hold your nose and accept the threat model' bow to practicality. |
05:52:25 | gmaxwell: | bramm: sure. But you must admit that narrow leaks to active attackers who are connected to you or your neighbors is quite a bit different than a largely unforgable highly replicated cryptographic ledger publishing information. |
05:52:47 | gmaxwell: | The Tor website is very frank about the limits; the users are largely happy to ignore the caution. |
05:53:30 | gmaxwell: | Likewise, the bitcoin tech community is very frank... worse for us because we have fanboys who give outright misinformation, and trying to correct them is pointless. |
05:53:46 | bramm: | gmaxwell, Like I said, I view anonymity features of Bitcoin as accidental and not central |
05:54:15 | op_null: | gmaxwell: haha. like if anybody tries attacking the network with massive amounts of hashpower we'll just ban them from the network :P |
05:54:21 | midnightmagic: | Worse is trying to give normal people enough information to recognise that the people spreading misinformation are spreading misinformation, and, as a subset of that, where to find more-accurate data. |
05:54:32 | op_null: | gmaxwell: that quote still hurts. |
05:54:33 | bramm: | Yes, there's definitely a pattern of central devs being far saner than fanboys. I've found a common pattern of being ready to be mad at people for what I thought were their views, only to find out that their views were either horribly misrepresented or far more nuanced, or both |
05:54:46 | gmaxwell: | bramm: it's not accidental. It's a day-one part of the design, and also pretty important... since fungibility risk is a big problem for any money. A loss of fungibility can make the decentralization pointless if you find yourself needing to consult a centralized taint registry to find out if a coin is one you want to accept. |
05:55:37 | gmaxwell: | Also few parties are really interested in using a money which makes their every economic transaction visible to the competition, thieves, or personal enemies. |
05:55:56 | bramm: | gmaxwell, anonymity can probably be brought on much better at the access point than in the middle of the thing. This is a common theme in anonymity: You must give people pseudonyms, and must be very clear about what the boundary of that pseudonym is |
05:56:26 | kanzure: | after a certain point of using a pseudonym you have leaked too many bits anyway |
05:56:33 | midnightmagic: | A good signal is information about bitcoin which marginalizes or minimizes the work of actual cryptographers or bitcoin developers (in most senses of the word). |
05:56:47 | kanzure: | (in the style of http://www.gwern.net/Death%20Note%20Anonymity ) |
05:57:02 | bramm: | The difficulty of making such claims is why I don't work on anonymity systems. Although Tor did start with the digital equivalent of me scribbling some stuff on a napkin |
05:57:13 | gmaxwell: | bramm: I think you've got that backwards. If a system is perfectly fungible it's easy to layer on compromises on top of it (e.g. knowing who you're transacting with). The other way around not at all. Most of the information leak in bitcoin is because the pattern of spending interlinks pseudonymous identities. This is especially so in that people are trading scarce assets... |
05:57:15 | midnightmagic: | reconsider linking to that site, kanzure |
05:57:26 | kanzure: | haha |
05:57:31 | kanzure: | yeah, i should.. |
05:57:59 | bramm: | gmaxwell, Not sure what you're saying, the block chain of course leaks practically everything except the association of keys to real world identities |
05:58:07 | op_null: | gmaxwell: bramm: ah, yep, this paper references the one I was thinking of which identified listening full nodes as the sources of transactions. |
05:59:19 | midnightmagic: | neglecting the difficulty of associating node entities on multiple networks? |
05:59:25 | op_null: | bramm: that pairing is easy to get though. going from identity > address is ridiculously easy. once you've found a pool of addresses you can isolate the boundaries, or what could be boundaries, by looking at how the transactions are formed. different software makes different transaction types. |
06:00:17 | phantomcircuit: | bramm, there are practical schemes which would correct that |
06:00:27 | phantomcircuit: | coinjoin combined with standard sized outputs for example |
06:00:51 | gmaxwell: | bramm: links of transactions among themselves leak even that if the system doesn't have privacy. Say, for example, that you said privacy wasn't really the job of bitcoin software. Great, now how could you ever hope to hide your salary from your landlord, or the costs of supplies to your customers? If someone wanted to stalk you, they could just keep paying you a penny every day and see where the funds end up to learn about your ... |
06:00:57 | gmaxwell: | ... identities. So well written bitcoin software should make the pseudonymous identities meaningful by avoiding linking them (as a basic, table stakes feature). If you're trying to optimize client sync by forcing linkage or making privacy costly (people often don't realize they need privacy until it's too late)... then I think that's not a good tradeoff. |
06:00:57 | phantomcircuit: | at the extreme blocks could contain the coinbase and a single coinjoin transaction |
06:02:02 | bramm: | gmaxwell, I think you're referring to slip-ups which I would view as so amateurish as to be unthinkable. A lot of in the wild protocols have really surprised me. |
06:02:11 | gmaxwell: | bramm: good wallet software (anything with a positive score for this on bitcoin.org) doesn't encourage address reuse and avoids needlessly linking up histories. Better practices are possible but it's an evolving area, and one that's hurt by fanboys exaggerating the current level of privacy... reducing consumer demand for more. |
06:03:48 | bramm: | True, you can try to make subsequent transactions only come out of one incoming previous payment. That doesn't always work though, and makes having a database of everything require more space |
06:04:50 | bramm: | Also it's really easy to screw up. If you pay using whatever loose change is convenient then your various sources of loose change are likely to all wind up going to every possible output you have, which leaks a lot more than if you simply had two separate accounts with clear balances and never mixed them |
06:05:44 | op_null: | small defined amounts are probably not what you want either though |
06:05:56 | bramm: | And it of course won't work anywhere near as well as a delinking service, and those should really use standard quantized values to avoid obvious linkages |
06:06:09 | gmaxwell: | There are other reasons why payments need to be separate (replay attacks; or at least solved via something that has just as much overhead as making them separate). Existing wallets already do have the basic pro-privacy features. Obviously total separation is better but has a cost to the user which is quite considerable including needing to know in advance what separation they need, and not working when the fund flows are imbalanced. |
06:06:10 | op_null: | delinking service? well. no. |
06:06:32 | gmaxwell: | bramm: also, linkage in bitcoin is less evidence than you might think it is.. due to coinjoin. |
06:06:34 | bramm: | If I pay $1,528.32 to one person to delink it and they give $1,528.32 to some other account, people can probably guess who's who |
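A small sketch of the amount-matching attack bramm describes: if a mixing hop preserves distinctive amounts, pairing deposits with withdrawals is trivial, which is why quantized denominations help. The amounts and names below are made up.

    def match_by_amount(deposits, withdrawals):
        """deposits/withdrawals: dicts of identity -> amount in satoshi."""
        links = []
        for who_in, amt_in in deposits.items():
            hits = [who_out for who_out, amt_out in withdrawals.items()
                    if amt_out == amt_in]
            if len(hits) == 1:            # a unique amount gives an unambiguous link
                links.append((who_in, hits[0]))
        return links

    deposits    = {"alice": 152_832_000, "bob": 100_000_000}
    withdrawals = {"addr_X": 152_832_000, "addr_Y": 100_000_000}
    print(match_by_amount(deposits, withdrawals))
    # [('alice', 'addr_X'), ('bob', 'addr_Y')] -- with quantized outputs both sides
    # would carry the same standard amount and the match would be ambiguous.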
06:07:36 | gmaxwell: | (also, any explicit 'service' of the classical centralized sort has costs which are unacceptable to the point of uselessness, alas... but fortunately we have tools that don't require centralized services) |
06:07:44 | op_null: | bramm: naturally. you can do a neat attack on blockchain.info's "shared coin" in which you pick the inputs and outputs, and match the pairs. |
06:08:02 | op_null: | no sorry, "shared send", "shared coin" is a different service. |
06:08:14 | bramm: | gmaxwell, Of course conjoin is the exact opposite of delinking, so the relative merits aren't all that obvious |
06:09:15 | bramm: | argh, xchat autocorrects coinjoin to conjoin |
06:09:24 | gmaxwell: | bramm: It means that linkage doesn't mean what you might think it means. At the limit, linking is the same as delinking. :P Complete anonymity set. |
06:09:49 | gmaxwell: | hah well coinjoin is a play on conjoin, so fair enough. |
06:10:15 | op_null: | gmaxwell: I fear people will expose themselves by leaving different fingerprints on each side of the coinjoin transaction. that's a very real risk, and it goes beyond compressed/uncompressed keys. |
06:10:32 | gmaxwell: | Coinswap ( https://bitcointalk.org/index.php?topic=321228.0 ) is the disjoint version... but it's less interesting due to overheads, IMO. |
06:10:55 | bramm: | What's the usual algorithm for putting together a payment out of loose change? |
06:11:02 | kanzure: | "coin selection" |
06:11:23 | kanzure: | https://github.com/bitcoin/bitcoin/blob/33d5ee683085fe5cbb6fc6ea87d45c5f52882232/src/wallet.cpp#L1232 |
06:12:00 | op_null: | depends on the wallet software. some of it like bitcoin core is pretty random, some like Breadwallet seem to be fairly insane (they spend more outputs than they really should for no real reason, as far as I can see) |
06:12:01 | gmaxwell: | bramm: in bitcoin core we use a knapsack solver, that first tries to assemble a transaction from whole coins, then it tries to minimize the distinct number of inputs. |
06:12:47 | gmaxwell: | op_null: so I've seen a fair number of greenback wallet authors decide that a good algorithm is a greedy one that spends the smallest coins first... and produces awful transactions. Normally they learn the error of their ways quickly. :) |
06:13:04 | kanzure: | so waiting around forever for confirmations is not ideal? whaaat |
06:13:24 | bramm: | If you're minimizing the number of inputs, won't it work to repeatedly add the largest coin until you go over? |
06:14:02 | midnightmagic: | bramm: There are a few ways; one includes getting a list of fresh addresses from the recipient exactly as long as the number of inputs you want to spend on to them. Another is a coinjoin with split entity outputs; manually selecting coins via a patch to core; using intermediaries over time with randomized outputs; etc. I.e. there is no usual algorithm, because as gmaxwell said people are convinced bitcoin is anonymous |
06:14:03 | kanzure: | i think he means picking solutions that have a tending-to-be-minimum number of inputs |
06:14:08 | midnightmagic: | already. |
06:14:15 | gmaxwell: | No, because the problem isn't convex like that. The greedy solution isn't optimal. |
06:14:49 | gmaxwell: | bitcoin core's solver adds random subsets and prunes them and retries a number of times to get the best run it comes up with. |
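A loose sketch of the selection strategy gmaxwell describes; this is not Bitcoin Core's actual SelectCoins() (that lives in src/wallet.cpp, linked above), just the shape of it: prefer an exact match, otherwise try random subsets and keep the one with the smallest overshoot. The exact result varies run to run.

    import random

    def select_coins(utxos, target, rounds=1000):
        """utxos: list of amounts in satoshi. Returns a list of selected amounts."""
        exact = [u for u in utxos if u == target]
        if exact:
            return [exact[0]]                      # a whole coin matches exactly
        best = None
        for _ in range(rounds):
            subset, total = [], 0
            for u in random.sample(utxos, len(utxos)):   # random order each trial
                subset.append(u)
                total += u
                if total >= target:
                    break
            if total >= target and (best is None or total < sum(best)):
                best = subset                      # keep the smallest overshoot so far
        return best

    print(select_coins([5_000, 12_000, 30_000, 70_000], 40_000))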
06:14:53 | op_null: | gmaxwell: I'll have to look at its code some day, breadwallet was doing something weirder than that. (example) I saw it spending 0.9 + 0.1 for a 0.05 output with the rest change, which seems pretty nuts. |
06:15:32 | gmaxwell: | op_null: that .. does seem a bit weird. What's the source say it's doing? |
06:15:51 | op_null: | gmaxwell: finding out. |
06:16:59 | gmaxwell: | There are a couple of dumb things bitcoin core is doing here, if anyone wants to hack on that for 0.11 feel free to nag me and I'd be glad to go over what I know and don't like. |
06:17:05 | midnightmagic: | bramm: One of my favourite, unfortunately only reserved for people with significant capital and lots of patience, is to buy lots of mining hardware, and then pay from coinbase payouts, which arguably could have the least identity associated with them. |
06:18:38 | phantomcircuit: | midnightmagic, i do that |
06:18:39 | gmaxwell: | midnightmagic: yea, mining is good for privacy ... but, mostly I don't find privacy techniques that are burdensom very interesting. Someone who knows they need privacy and are willing to work for it will get it (this is true of the world in general). In terms of protecting ordinary users from more conventional problems other approaches are needed. |
06:18:49 | phantomcircuit: | mostly for kicks though |
06:19:18 | midnightmagic: | phantomcircuit: me too it's my favourite |
06:20:11 | midnightmagic: | But I think I may have misunderstood Bram's question. I don't think he was asking for the usual ways to be private, just how core or the wallet software actually does it. |
06:20:38 | bramm: | Yes one could make the argument that certain coins are 'new' and hence don't leak information. Also one could argue that coins which haven't been used for a longer time leak less information. Or that you should try to use up a complete coin so that it will leak less information on later transactions |
06:20:59 | bramm: | midnightmagic, a combination of what can and should be done |
06:21:35 | op_null: | bramm: humans like whole numbers, which makes that harder. you're almost always left with a little change, or needing another output to make up the fee. |
06:22:51 | bramm: | A reasonable algorithm is to take the largest coin you have less than or equal to the amount you want to spend, then add the rest out of the smallest remaining coin larger than the remainder |
06:23:17 | gmaxwell: | What I'd like to have is better automatic link avoiding support, there is some now in the strategy but it makes some bad assumptions which are no longer true like users won't reuse addresses. And after that automatic coinjoining on spends so that analysis of what leakage remains is less reliable. |
06:23:20 | bramm: | That always uses one or two coins, and reduces the number of coins you have by one every time |
06:23:43 | bramm: | So you're minimizing conjoining, which one could argue is very good for anonymity |
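A sketch of the heuristic bramm proposes above: take the largest coin not exceeding the payment, then cover the remainder with the smallest coin that is big enough, so every spend uses one or two inputs.

    def two_coin_select(utxos, target):
        coins = sorted(utxos)
        # largest coin that does not exceed the payment (may not exist)
        first = next((c for c in reversed(coins) if c <= target), None)
        if first is not None:
            coins.remove(first)
            if first == target:
                return [first]                    # exact single-coin match
            remainder = target - first
        else:
            remainder = target
        # smallest remaining coin that covers the remainder
        second = next((c for c in coins if c >= remainder), None)
        if second is None:
            return None                           # not payable with one or two coins
        return [first, second] if first is not None else [second]

    print(two_coin_select([3_000, 8_000, 20_000, 45_000], 25_000))  # [20000, 8000]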
06:23:53 | op_null: | gmaxwell: would be nice to have lazy transactions too. send this money to this address, sometime in the next 24 hours. no rush. |
06:24:00 | gmaxwell: | bramm: one must also consider the size of your change and how that influences your further transaction selection. e.g. minimizing change size has a negative property of producing a lot of 'dust' ... it can be preferable when change happens to be sized like your future spends. |
06:24:39 | bramm: | gmaxwell, the algorithm I just said always reduces your number of coins by one with every transaction |
06:25:19 | bramm: | But it does have the problem that the total number of coins everywhere doesn't go down |
06:26:09 | gmaxwell: | it also always links inputs even when it could be avoided. There was someone working on some different selection heuristics that created a simulator, on one of the bitcoin github issues. |
06:26:15 | bramm: | Actually no it can reduce by more: I didn't specify what to do when you want to spend 9 credits and you have a bunch of singles and a 10 |
06:26:19 | gmaxwell: | Haven't heard from him in a bit. |
06:26:49 | bramm: | It isn't clear what behavior you want. You can either slowly merge coins or do it never until you put all your spare change into one big transaction |
06:27:45 | gmaxwell: | I think ideal behavior has some knowledge of what your future spending amounts look like (or a guess) and then splits or merges to move more towards that distribution. |
06:28:17 | phantomcircuit: | which is hard to do, thus powers of two |
06:28:20 | gmaxwell: | If in some transaction you can be especially efficient constructing it one way (e.g. no change), then do that. |
06:29:12 | gmaxwell: | We certainly do not want schemes which cause unbounded utxo set growth... but fortunately ones that do are bad all around, bad for privacy, bad for tx fees, etc. |
06:29:37 | op_null: | gmaxwell: that's a nice touch. breadwallet rolls back the block chain on launch and rescans for its keys just in case somebody gave false negatives. |
06:30:05 | gmaxwell: | op_null: how far back? |
06:30:22 | op_null: | wait, I misread that, it only does that if users request it. |
06:30:27 | gmaxwell: | ah okay. |
06:30:45 | gmaxwell: | op_null: I was going to suggest you contact them and find out if they've ever seen that attack. |
06:31:03 | kanzure: | re: simulations in a github issue, |
06:31:03 | kanzure: | https://github.com/bitcoin/bitcoin/pull/4906 |
06:31:11 | bramm: | If you're optimizing for dust you can keep all your coins as powers of two. That does a lot of coin merging though |
06:32:02 | bramm: | And you have an unpleasant decision to make if someone gave you a bunch of nickels |
06:32:36 | bramm: | But the general gist of what everybody is saying here amounts to 'eh, we do something, maybe we could do better' |
06:33:43 | bramm: | Doing knapsack leaks all kinds of stuff. If I suspect person A and person B are the same, then even if they have a huge wallet I can give person A $112.32 and ask person B for $112.32 and see if they give me an exact coin |
06:34:17 | gmaxwell: | well, you can always ignore coins... though this can have a bad effect on the network. E.g. even though litecoin has less than 1/10th the number of utxo with >$0.01 in value, it has something like 100x more utxo in general, in part because they responded to penny flooding attacks by making wallets just ignore the inputs and never spend them... and as a result a 'lite'coin node is more resource intensive than a bitcoin node. |
06:35:04 | gmaxwell: | bramm: yes any kind of passive security e.g. avoiding gratuitous linkage is vulnerable to confirmation attacks. |
06:35:48 | bramm: | There's a deep problem with transaction costs in general that whoever's accepting them gets paid but doesn't have to deal with the storage costs for everybody else later |
06:36:54 | gmaxwell: | Yep. well it's _not_ a deep problem if the system is limited enough that the costs are negligible. Agreed its a problem otherwise. One of the more complex schemes on the alt ideas page was a notion of prepaying fees. |
06:37:16 | gmaxwell: | I've also been a fan of utxo expiration (amiller too) but suggesting that can get you crucified around bitcoin-land. |
06:38:22 | gmaxwell: | The prepayment idea is cute, basically creation of the utxo reduces the maximum blocksize for the block that creates it... and increases the maximum blocksize for the block that spends it. ... by an amount specified in the utxo as the expected signature size. (and subject to some limits, which is where it starts getting complex) |
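A back-of-envelope sketch of the prepayment idea as just described: creating a utxo charges the creating block some space (the declared expected spend size) and spending it refunds that space to the spending block. The field name and numbers are invented for illustration, and the real proposal's limits are omitted.

    BASE_MAX_BLOCK = 1_000_000

    def effective_max_size(created_utxos, spent_utxos):
        """Each utxo carries a 'prepaid' expected-signature-size in bytes."""
        debit  = sum(u["prepaid"] for u in created_utxos)   # charged at creation
        credit = sum(u["prepaid"] for u in spent_utxos)     # refunded at spend
        return BASE_MAX_BLOCK - debit + credit

    print(effective_max_size(created_utxos=[{"prepaid": 110}, {"prepaid": 110}],
                             spent_utxos=[{"prepaid": 150}]))   # 999930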
06:38:57 | bramm: | I can't seem to find a good web page on max clique. If I recall correctly, if half the edges are filled in then it's easy to find a clique of size n^.5 but extremely hard to find one even slightly larger and there's almost always one of that size |
06:39:08 | bramm: | almost always one of size 2*n^.5 I mean |
06:39:22 | bramm: | Apparently max-clique isn't sexy enough for people to like pontificating about it online |
06:39:46 | bramm: | What is a utxo? |
06:40:20 | gmaxwell: | unspent transaction output. The utxo set is the actual data a full node tracks for forward validating new blocks that come in. |
06:41:47 | gmaxwell: | a utxo entry maps {txid, output index} -> {version,height,coinbase_flag,value,script_pubkey} or logically equal. |
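A minimal sketch of that mapping as a data structure, with hypothetical field names; real nodes store this far more compactly.

    from dataclasses import dataclass

    @dataclass
    class CoinEntry:
        version: int
        height: int              # block height the output was created in
        is_coinbase: bool
        value: int               # satoshis
        script_pubkey: bytes

    utxo_set: dict[tuple[str, int], CoinEntry] = {}

    def add_output(txid, index, entry):          # on connecting a block
        utxo_set[(txid, index)] = entry

    def spend_output(txid, index):               # on seeing a spend of it
        return utxo_set.pop((txid, index))       # KeyError => spend of a nonexistent coin

    add_output("ab" * 32, 0, CoinEntry(1, 350_000, False, 50_000, b"\x76\xa9..."))
    print(spend_output("ab" * 32, 0).value)      # 50000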
06:42:58 | bramm: | Yeah I assume that making small utxos time out would not be popular |
06:43:09 | bramm: | What does Bitcoin do to prevent DDoS by small transactions? |
06:43:16 | lclc_bnc: | lclc_bnc is now known as lclc |
06:43:28 | op_null: | fees, and nodes won't relay floods of no-fee transactions. |
06:44:06 | gmaxwell: | and regardless of fees transactions which create very tiny 'dust' outputs are not relayed or mined by most nodes. This is 'policy' not a consensus rule, so nodes don't have to be consistent in it. |
06:44:31 | gmaxwell: | (so, e.g. it's fine for different versions of the software to just twiddle around these thresholds) |
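A sketch of node-local relay policy as just described. The dust and minimum-fee thresholds here are illustrative numbers only; because this is policy rather than consensus, each node is free to pick its own.

    DUST_THRESHOLD = 546          # satoshis -- illustrative, node-configurable
    MIN_RELAY_FEE_PER_KB = 1_000  # satoshis/kB -- likewise policy, not consensus

    def passes_relay_policy(tx_outputs, fee, tx_size_bytes):
        if any(value < DUST_THRESHOLD for value, _script in tx_outputs):
            return False                               # creates dust: don't relay or mine
        if fee < MIN_RELAY_FEE_PER_KB * tx_size_bytes / 1000:
            return False                               # treated as free / low priority
        return True

    print(passes_relay_policy([(10_000, b"")], fee=1_000, tx_size_bytes=250))  # True
    print(passes_relay_policy([(100, b"")],    fee=1_000, tx_size_bytes=250))  # False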
06:45:05 | bramm: | What's to stop a miner from doing a DDOS with lots of small transactions? |
06:45:07 | gmaxwell: | also, adding to what op_null said... very low fees (configurable node local policy) are treated as zero for the purposes of e.g. the relay limiting and mining priority. |
06:45:48 | bramm: | Since a miner can put whatever they want into the chain, and it only takes one jerk to add a huge amount of stuff... |
06:46:04 | gmaxwell: | A miner? nothing at all. Except that he's receiving some ten grand in bitcoins and it probably isn't in his interest to debase their value by crapping on the system. He can only add 1mb of data at most. |
06:46:33 | bramm: | Do you mean there's a hard limit of 1mb on transactions for each block? |
06:46:46 | gmaxwell: | He takes some very small increase in orphaning risk by mining a big block full of txn that weren't previously relayed and validated... but that effect is probably fairly negligible. |
06:46:50 | gmaxwell: | bramm: absolutely. |
06:46:52 | op_null: | 1MB of crap that hasn't been seen by their peers will take a lot longer to validate than 1MB of previously seen transactions too. |
06:47:28 | bramm: | That would seem to be a hard limit which would have to be lifted at some transaction volume |
06:47:42 | gmaxwell: | bramm: technically a million bytes and the block header comes out of that too. |
06:47:53 | midnightmagic: | main.h:static const unsigned int MAX_BLOCK_SIZE = 1000000; |
06:48:02 | op_null: | it is literally a hard limit though. can't twiddle with it without a lot of bother. |
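A sketch of the consensus check being discussed, using the constant quoted from main.h: the whole serialized block, header included, must fit under the limit.

    MAX_BLOCK_SIZE = 1_000_000    # bytes, as in main.h above

    def check_block_size(serialized_block: bytes) -> bool:
        return len(serialized_block) <= MAX_BLOCK_SIZE

    print(check_block_size(b"\x00" * 999_999))    # True
    print(check_block_size(b"\x00" * 1_000_001))  # False: rejected by every full node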
06:48:17 | bramm: | That could cause an interesting problem in the future |
06:48:30 | op_null: | well, a fork. |
06:48:51 | bramm: | How much forkage has there been in the past? |
06:49:03 | midnightmagic: | potentially; unless sidechains works out, in which case some forms of scaling can go into those |
06:49:16 | gmaxwell: | bramm: potentially; it's a protocol rule, not policy. Lifting it is technically like lifting the supply cap on bitcoins, though presumably easier. I guess this explains some of your earlier incentive concerns wrt scaling. Those issues of a slippery slope that loses decentralization can't sneak up on us because of the limits. |
06:49:57 | op_null: | depends on your definition. there was a hard fork early on which disabled a lot of dangerous opcodes, and the fork where BDB issues caused some nodes to stick and fork on blocks (but not all at the same time). there's been no others. |
06:50:17 | bramm: | How likely are side chains to actually happen? That would seem to be a much more immediate and potentially serious forkage |
06:50:25 | gmaxwell: | bramm: subject to some slight definitional debate we've never had a hard fork. E.g. you can take a first release node of bitcoin and with a little gatewaying it _should_ sync up to the current tip. Should because bdb had non-deterministic behavior which would potentially make old nodes get stuck under some conditions. |
06:50:40 | midnightmagic: | bramm: It depends on what you call forking. The codebase itself has changed the way it interprets a valid block a few times, but much of what I would consider to be a forking change I typically get yelled down about. :-P |
06:50:45 | gmaxwell: | We've had many soft forks, which are backwards compatible with old software. |
06:51:22 | gmaxwell: | If you include the fact that 0.8 removed the non-deterministic behavior from prior versions and thus potentially is happy with chains old versions would non-deterministically reject; then we've had one hard fork. |
06:51:30 | bramm: | You mean the very first release of Bitcoin would accept the entire current history? |
06:52:02 | gmaxwell: | bramm: it should. At least if you feed it to it in one go without reorganizations, the bdb getting stuck mostly prevents reorgs. |
06:52:16 | gmaxwell: | I haven't done it for about a year. |
06:52:46 | rusty: | rusty has left #bitcoin-wizards |
06:52:55 | bramm: | What does it use now? bdb isn't exactly known for stability, for no apparent reason |
06:52:57 | gmaxwell: | You might grow a bit old while waiting ... it syncs easily 100x slower than current code (maybe several hundred x) |
06:53:03 | op_null: | leveldb. |
06:53:09 | midnightmagic: | leveldb for the blockchain, bdb for wallet |
06:53:22 | bramm: | gmaxwell, Did it not pipeline requests from peers? |
06:53:29 | gmaxwell: | well more than leveldb, we use leveldb in a way that makes it less likely to cause any future issues than the way bdb was used. |
06:54:15 | midnightmagic: | .. how about: leveldb for the blockchain indices; raw block data for actual block storage; bdb for wallet? |
06:54:23 | bramm: | bdb is known for just plain getting garbled. Because key/value storage is apparently a problem which takes decades to debug :-P |
06:54:41 | op_null: | midnightmagic: * bdb for wallets for historical reasons but it's totally overkill now |
06:55:04 | bramm: | why not, say, tokyo tyrant, or is leveldb functionally equivalent? |
06:56:13 | gmaxwell: | bramm: it would request multiple blocks at once, but it was just slow in every way... keep in mind that the history has 52,266,519 transactions, and probably 10x that number of utxo. It's not a small task. Git master can full sync to tip in about 2.5 hours on a fast desktop... its a lot of work to do in a small time. |
06:56:30 | Taek: | * Taek scrolls for 12 minutes looking for the start of the conversation |
06:56:56 | bramm: | How does it have 10 times as many utxos as transactions? |
06:57:01 | gmaxwell: | bramm: all we need is a key value store with atomic transactions. |
06:57:07 | bramm: | Don't you need a transaction or a mining operation to make a utxo? |
06:57:16 | gmaxwell: | bramm: because some transactions have hundreds of outputs. |
06:57:31 | bramm: | Taek, might take you a while, I've been here all day |
06:57:36 | midnightmagic: | Taek: 07:40m ago |
06:58:30 | midnightmagic: | bramm: A single transaction can have multiple individual outputs. |
06:58:51 | bramm: | midnightmagic, but a factor 10? What kind of transaction would you be doing for that? |
06:59:01 | bramm: | Unless everything is basically mining pool redistribution |
06:59:17 | gmaxwell: | bramm: the median transaction has two outputs (payment and change) |
06:59:55 | midnightmagic: | bramm: p2pool and other mining operations can sometimes include hundreds of outputs in coinbase. early forms of spam would do it. current spam + mining collusion are doing it. plus the normal -core transaction only rarely has a single output, usually it has two.. ah, what gmaxwell said. |
07:00:02 | bramm: | but doesn't the mean transaction have to have 10 outputs for there to be 10 times as many utxos as transactions? |
07:00:20 | gmaxwell: | Then some services batch their payments to reduce the transaction count, users will sometimes make multiple payments in a single transaction (which is supported in the ui in bitcoin core and I think most wallets). Various mining pools, lotteries, ... etc. use large numbers of outputs. |
07:00:43 | midnightmagic: | ah, right, satoshidice bloat. |
07:00:53 | gmaxwell: | bramm: I might have been exaggerating there, I'm not sure what the mean is. Don't have a quick way to count it. I know it's more than 3. |
07:01:05 | op_null: | midnightmagic: down to a third of the block chain now. |
07:01:05 | midnightmagic: | "come help me stress test the bitcoin blockchain" as voorhees said. |
07:01:11 | bramm: | gmaxwell, A fascinating data point, regardless |
07:01:19 | midnightmagic: | op_null: brutal man |
07:01:43 | go1111111: | "How likely are side chains to actually happen?" -- you know gmaxwell and a bunch of other smart dudes just raised 21 million dollars from Reid Hoffman and some others to make sidechains a reality right? so my guess is more likely than not |
07:01:49 | gmaxwell: | I can count the ratio for the existing utxo set easily enough. But that may not reflect the overall history well. |
07:02:40 | bramm: | I have a suspicion that a huge fraction of all the payouts to date are mining pool redistribution |
07:02:48 | gmaxwell: | go1111111: That the systems get built is not a guarantee that they become a market reality, however, which is why I wasn't going to debate it. Best to show. :) |
07:03:41 | gmaxwell: | bramm: they're a lot, but you underestimate the really inefficient gambling services I suspect. :) also other 'interesting' things like dividends on cryptostocks. (read: unregulated securities traded primarily by con artists :) ) |
07:04:18 | bramm: | midnightmagic, what are the 'current spam' and 'mining collusion'? |
07:04:47 | bramm: | gmaxwell, Yes there's an obvious business model for 'interest bearing accounts' |
07:05:16 | midnightmagic: | bramm: Spam still exists in the form of transactions of very tiny one-or-two satoshis, including a message (which shows up in blockchain.info because they're stupid) and since most nodes don't accept it, I'm presuming mining collusion to get them actually into the blockchain. |
07:05:46 | bramm: | midnightmagic, Ah gotcha |
07:06:26 | phantomcircuit: | midnightmagic, it's people running patches which include all valid transactions into the block prioritized by fee/kb only |
07:06:35 | gmaxwell: | bramm: blockchain.info has a really ill advised 'send a message with your transaction' feature, it doesn't actually put the data into the blockchain (somehow we managed to yell them out of doing that)... but there are a whole bunch of idiots that use that feature to send advertisements to people. The aforementioned filtering blocks most of it, but some gets through. |
07:07:05 | bramm: | On side chains, I can see the political force behind getting them accepted, but I also see tremendous technical barriers on the other side, including that it's a hard fork |
07:07:34 | phantomcircuit: | not a hard fork |
07:07:37 | gmaxwell: | so the current utxo set has 15103752 txouts from 4194830 transactions. So 3.6, how this reflects the history is anyones guess... since spending patterns probably are different for many out vs few out transactions. |
07:07:47 | gmaxwell: | bramm: oh, it's not a hard fork. |
07:08:04 | op_null: | bramm: the sort of neat thing is that opcodes can be "added" without a hard fork. |
07:08:07 | phantomcircuit: | gmaxwell, i've probably been increasing the ratio a bunch |
07:08:14 | phantomcircuit: | all the cloudhashing payouts are sendmany |
07:08:16 | dgenr8: | interest accounts... i'm not sure where the demand for bitcoin loans would come from today. other than those who want to short-sell it |
07:08:29 | phantomcircuit: | (they're all paid from the same address anyways so it doesn't much matter) |
07:08:38 | gmaxwell: | bramm: and they can be used with a security reduction without any changes to bitcoin at all. (see appendix A) so there is an on ramp for adoption prior to convincing people that its a good thing to have. |
07:08:45 | bramm: | Oh god, the headline list on blockchain.info... |
07:09:07 | op_null: | bramm: you'll notice the top item, though it's not stated, it's paid advertising. |
07:09:32 | gmaxwell: | (and we've extended script successfully in the past with soft-forks and are likely to do so again very soon; we have two soft-fork script changes pending that basically just slipped 0.10) |
07:09:39 | bramm: | op_null, Yeah that screams 'this is a scam' if ever I've seen one |
07:10:10 | bramm: | gmaxwell, How are releases of pegged coins treated by non-updated software? |
07:10:15 | op_null: | bramm: blockchain.info are fools on a number of levels, that barely scratches the surface. |
07:10:32 | gmaxwell: | bramm: everything about that site is ... unfortunate. pretty much. I like the fact that they have a JS web wallet on the same domain that displays weakly validated third party data (ads and data scraped from the blockchain)... they've had several severe xss bugs. |
07:11:09 | midnightmagic: | bramm: tangential, potentially unfortunate side-effect of blockchain.info being irresponsible: https://github.com/bitcoin/bitcoin/issues/2653 |
07:11:35 | gmaxwell: | bramm: For script enhancements in soft-forks, old nodes see payments to the new script types as "non-standard, anyone can spend" so while they won't relay or mine them, they'll happily accept anything that spends them in a block. |
07:12:41 | bramm: | gmaxwell, Couldn't that cause a miner to accept them being spent counter to the new script rules? |
07:13:30 | bramm: | Do you ever get the feeling that the goodwill of your good crypto work is being exploited by overt scammers to make a lot more money off of it than you ever will? |
07:13:51 | bramm: | (my last two comments were totally unrelated) |
07:14:12 | gmaxwell: | Yes/no. Because they're non-standard, miners won't add them to blocks themselves. But if someone else does, they'll accept the block. The way soft forks work is that miners signal in blocks with a flag, that "e.g. when 950 of the last 1000 blocks have this flag, I'll start imposing this new rule" |
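A sketch of the rollout rule gmaxwell describes, using the example 950-of-1000 threshold mentioned above; the flag-counting details of real deployments differ.

    def should_enforce(recent_block_flags, threshold=950):
        """recent_block_flags: one bool per block in the last 1000, True if it signals support."""
        return sum(1 for f in recent_block_flags if f) >= threshold

    flags = [True] * 960 + [False] * 40
    print(should_enforce(flags))   # True: start imposing the new rule from here on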
07:16:01 | gmaxwell: | bramm: I've sometimes felt that way, at times _but_ (1) most of the scammers aren't actually making much money, and aren't managing to hold onto it if they do. (2) that someone else benefits isn't really my problem (3) if they do manage to get some more crypto stuff developed and deployed thats good for everyone ultimately (so far, track record here is poor). |
07:16:16 | bramm: | So as a matter of policy miners will play nice, but if one wanted to be a jerk it could really fuck things up? |
07:17:36 | gmaxwell: | bramm: well kinda, what happens is that after the enforcement has kicked in, e.g. 950 of 1000... if they put in a bogus spend, the 5% of miners who are not updated it yet will extend that block and potentially get orphaned along with them. There will be a bit of chain reorging, but its just at the tip. So they can be disruptive, but it's pretty costly. |
07:17:54 | bramm: | Er, well, I guess if you waited until most miners said they spoke the new thing, then miners who spoke the new thing could outright ignore history which has 'bad' transactions in it |
07:17:55 | Taek: | can someone confirm this: a miner with a properly strict set of rules for rejecting non-standard transactions can still produce blocks that the network will accept after a softfork? |
07:18:08 | gmaxwell: | bramm: yep |
07:18:22 | gmaxwell: | Taek: absolutely, that's part of the goal. |
07:18:27 | bramm: | gmaxwell, I understand now. Are there any plans on what that threshold might be for side chains? |
07:19:25 | gmaxwell: | For p2sh it was more like 70% and the txn weren't even non-standard so unmodified old miners would mine garbage. It caused some moderate amount of forking. We agreed that was too permissive after the fact. |
07:20:00 | bramm: | How long did the 70% take to happen? |
07:20:05 | Taek: | if softforks are not harmful to conservative miners, why wait until a full 95%? Seems like you could execute successfully at a lower percentage, maybe 85% - especially if you're looking at the past 1000 blocks. |
07:20:07 | gmaxwell: | For height in coinbase the lock-in was at 95% and that took a long time to hit. I think that was perhaps too aggressive in retrospect... esp for something where unmodified miners can't footgun themselves. |
07:20:17 | op_null: | bramm: the one that gets me is people being paid to stand up in front of people and present themselves as a cryptography expert. earning money for spreading misinformation is pretty insane. |
07:20:52 | gmaxwell: | bramm: two months? something like that. |
07:21:04 | bramm: | op_null, Given that schneier is viewed as the world's preeminent crypto expert, I think that one's a lost cause |
07:21:46 | op_null: | bramm: heh, you can go far worse, trust me. |
07:21:58 | gmaxwell: | we need to make these decisions for BIP62 and BIP65 soon. |
07:22:14 | gmaxwell: | op_null: show him that image. :P |
07:23:18 | gmaxwell: | For BIP65 I'm going to probably suggest 70-80% and see what people argue. In any case, at a minimum it must be a decisive supermajority, so that if any doofus does produce an invalid block its rapidly orphaned. |
07:23:33 | op_null: | https://i.imgur.com/tvAlMPf.png - this one? |
07:24:35 | gmaxwell: | E.g. 51% is pretty pessimal, someone produces an invalid block and it might take an uncomfortably long time for the majority to orphan it. This doesn't generally present _that_ much risk, since the same transactions will be in both sides of the fork, for the most part... but any large fork has some risk. |
07:25:35 | bramm: | Of course, being able to play such games raises the question of how decentralized Bitcoin really is... |
07:26:59 | bramm: | op_null, Oh I'm sure you can |
07:27:09 | gmaxwell: | Well soft forking power is very narrow, you can only restrict the set of valid transactions. ... and it's no secret that Bitcoin miners can freely censor the chain. ... I don't know why people aren't more frank about that, it's a very serious limitation. |
07:27:53 | gmaxwell: | but a helpful one in that it allows smoothly rolling out targeted fixes and enhancements. |
07:28:15 | gmaxwell: | soft forks have been used to fix several end of the world bugs. |
07:28:17 | bramm: | My experience has been that people throw a shitfit if you say anything other than that Bitcoin farts rainbows |
07:29:08 | gmaxwell: | bramm: well it's been worse since early 2013... got a LOT of new users since then, and "september" hasn't ended yet. |
07:29:11 | bramm: | By 'soft fork', you mean changes where something which would have been accepted before is no longer accepted now? |
07:29:25 | gmaxwell: | bramm: correct. |
07:29:35 | bramm: | I literally got on the net in september '93 |
07:31:10 | gmaxwell: | :P (that's roughly when I got reliable net access myself, to be fair) |
07:31:40 | gmaxwell: | E.g. in the original bitcoin software you could spend any coin with the signature (effectively) "return true". |
07:32:07 | gmaxwell: | a most unhelpful feature that was removed via a soft-forking change. :P |
07:32:55 | bramm: | That would seem more bug fix than soft fork |
07:33:07 | op_null: | you could also blow up the stack and crash the node trying to validate a script by using OP_CAT. that was removed in the hard fork early on. |
07:33:11 | gmaxwell: | Well, one and the same, that distinction is social, not technical. |
07:33:51 | gmaxwell: | op_null: soft fork. (if that change was actually a hardfork due to some subtle reason, e.g. the encoding straddling, it's still latent) |
07:33:57 | bramm: | I'm rather skeeved by the expressiveness of Bitcoin's coin-spending syntax |
07:34:06 | op_null: | * op_null squints |
07:34:22 | op_null: | I could have sworn pieter mentioned it as being a hard fork |
07:34:39 | op_null: | must be misremembering or something |
07:35:07 | gmaxwell: | bramm: The justification is straightforward though... what use is a strongly decentralized cryptocurrency system if it lacks enough affordances to go build decentralized services on top of it? A coin that you can only use in interesting ways by handing it over to third parties might as well be centralized to begin with (and thus have much better performance/scale) |
07:35:50 | bramm: | gmaxwell, the only sane use I've heard of it so far is supporting multi key spending. Side chains are a whole other ball of wax |
07:36:02 | gmaxwell: | op_null: there is some back and forth over if it is or isn't based on a push opcode straddling the script concat boundary. I think we've gone back and forth over it being hardforkable, but whichever it actually is, the fork hasn't been triggered yet. |
07:37:14 | gmaxwell: | bramm: oh, it's used for other stuff. Just not in any great amount. Some of the uses are hampered by some other system limitations; basically, protocols that require refund transactions to avoid holdup risk are unsafe currently, and this is fixed by BIP62 and BIP65. So we may see more uses. |
07:37:28 | op_null: | gmaxwell: bah, don't pose it like a challenge to peter. I was trying to work out why there's a few SignatureHash() : nOut=2 out of range errors during sync. googled it and his name on a hunch, and what do you know. |
07:38:27 | gmaxwell: | bramm: most common uses beyond multisig are hash-reveal transactions which have been used for atomic swaps. There are also some lottery transaction uses, but no shock the people savvy enough to do secure decentralized coinflips aren't really into gambling. :) |
07:38:28 | bramm: | Other uses, like satoshi dice :-) |
07:39:24 | bramm: | What would you be swapping with an atomic swap? |
07:41:15 | Taek: | securely trading bitcoins for othercoins |
07:41:40 | bramm: | Oh, you mean you put the same hash reveal on the other coin? |
07:41:54 | bramm: | That's cute, takes care of the only 'real' use of smart transactions I know of |
07:42:37 | bramm: | What are 'refund transactions'? |
07:43:09 | op_null: | bramm: run and tell the ethereum developers :) |
07:43:16 | gmaxwell: | But even relatively simple 'powerful' features can be used to do a lot at least theoretically; https://en.bitcoin.it/wiki/Zero_Knowledge_Contingent_Payment ... but look at the state of cryptographic software in general. A lot of things are possible; we just don't have tools for them. (I rant: https://en.bitcoin.it/wiki/User:Gmaxwell/things_im_surprised_dont_exist ) |
07:43:49 | bramm: | op_null, The ethereum developers are on their own, I think it would be a waste of my time, and possibly a danger to my person, to try to help them |
07:43:56 | gmaxwell: | bramm: yea, you can trade with compatible altcoins; or with a slightly more complex transaction pattern for other people's bitcoins in a way that doesn't link the transactions. |
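A minimal sketch of the hash-reveal core of such a swap, with the signatures and timelocked refunds omitted: both chains lock coins to the same hash, so claiming one side necessarily publishes the preimage the other side needs.

    import hashlib, os

    def H(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    secret = os.urandom(32)      # known only to the swap initiator at first
    lock = H(secret)             # the same hash goes into both chains' output scripts

    def can_claim(preimage: bytes, lock: bytes) -> bool:
        return H(preimage) == lock

    # Claiming the coins on one chain means publishing `secret` on-chain...
    assert can_claim(secret, lock)
    # ...and that reveal is exactly what lets the counterparty claim on the other chain.
    print("swap completes atomically (modulo the refund timelocks)")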
07:44:25 | op_null: | bramm: agreed completely. |
07:46:00 | bramm: | gmaxwell, How does the unlinking transaction work? |
07:48:42 | bramm: | gmaxwell, On your list of things which don't exist, re: high latency mixes, that's what pynchon gate is, but there's been little interest in it so it hasn't been built. There are more secure but less scalable things in the literature for doing voting, because that requires a decrypt + unknown permutation operation, but those just exist in the academic literature |
07:48:51 | gmaxwell: | yes, all these things exist in the lit. |
07:49:03 | gmaxwell: | They don't exist in any practical sense as far as I know though. |
07:49:25 | bramm: | threshold cryptography is used in nuclear weapons |
07:49:27 | gmaxwell: | you and I generate keys. You and I place our respective coins into 2 of 2 escrows, but don't publish the escrow placing transactions until we've received timelock refunds. Then we publish our escrow payments, locking up our coins. Then we exchange releases of the payments that look like payments into an atomic swap, but do not publish them. After doing this we are both confident that we can get our funds via the swap should the ... |
07:49:33 | gmaxwell: | ... other party defect, and so we can just happily pay out of the escrow directly. ... thus keeping the hashlocked atomic swap out of the chain. This is a really short description, a full protocol is at https://bitcointalk.org/index.php?topic=321228.0 |
07:50:15 | gmaxwell: | bramm: it's also used in Bitcoin. :) considering how compromised every computing device out there is, you'd think that we ought to use it for a bit more than bitcoin and nukes. |
07:51:10 | bramm: | Okay I'll put that on my list of things to read later, too exhausted right now |
07:52:47 | bramm: | I'm still not sold on anything beyond multiple keys being all that useful, but am open to being convinced |
07:55:58 | bramm: | gmaxwell, On steganography there are deep and interesting problems in making it practical. I came up with a big improvement in the state of the art but whether it's 'useful' is another matter |
08:20:43 | SubCreative: | SubCreative is now known as Sub|zzz |
08:28:27 | cbeams_: | cbeams_ is now known as cbeams |
10:09:54 | lclc: | lclc is now known as lclc_bnc |
10:23:40 | cashmen: | cashmen has left #bitcoin-wizards |
10:34:16 | lclc_bnc: | lclc_bnc is now known as lclc |
11:11:47 | c0rw|sle_: | c0rw|sle_ is now known as c0rw1n |
11:40:52 | c0rw1n: | c0rw1n is now known as c0rw|work |
11:51:38 | c0rw|work: | c0rw|work is now known as c0rw1n |
14:37:40 | fanquake: | fanquake has left #bitcoin-wizards |
15:08:26 | andytoshi: | in case anyone is curious there are 15103496 utxos right now. i don't know what the total that ever existed is (it would be easy but my rust toolchain is not working right now so i can't change my code..) |
15:16:42 | sipa: | i have 15097494 |
15:16:59 | kanzure: | huh i am not even 0.006% |
15:17:02 | sipa: | ? |
15:17:27 | kanzure: | thought i had proportionally more outputs than that |
15:17:42 | sipa: | haha |
15:18:30 | kanzure: | nope nevermind i am just bad at math |
16:57:03 | andytoshi: | sipa: i'd expect you are right |
16:57:44 | andytoshi: | iirc my counts do not go down during reorgs |
16:58:27 | andytoshi: | (which is a bug, not deliberate) |
17:04:06 | nullbyte_: | nullbyte_ is now known as Guest27678 |
17:07:57 | sipa: | andytoshi: and it doesn't exclude pruned outputs |
17:31:46 | NewLiberty: | Still, it might be a useful bug. |
17:32:32 | andytoshi: | iirc mine doesn't exclude pruned outs either, i could be misremembering |
17:32:39 | NewLiberty: | So long as you also have the pruned set. |
17:33:25 | NewLiberty: | the diff would be interesting |
17:34:04 | andytoshi: | why would i keep the pruned set around? the diff is not super interesting, almost all the pruned outs are op_returns |
17:34:23 | andytoshi: | i spent a ton of time writing an unspendability prover and it turned out to be really not useful :) |
17:45:38 | op_null: | gmaxwell: I quite like parts of the "Deanonymisation of clients in Bitcoin P2P network" paper, particularly abusing addr messages to estimate the number of connections to a node. |
17:50:14 | op_null: | I think there's quite a good justification for people doing more interesting setups of their nodes just to mess with this sort of analysis. they'd probably struggle with any node that also used bluematt's fast transaction relay for example, as it can usually outrace the P2P network on most transactions. |
17:50:26 | amiller: | op_null, if you liked that, you'll like my upcoming paper too |
17:51:17 | op_null: | similar sort of stuff, or just equally interesting? |
17:51:25 | amiller: | similar stuff |
17:51:43 | op_null: | I've long been interested in the number of sniffer peers that roam the network. |
17:52:31 | op_null: | (to define that, peers which are not normal nodes, pretend to be /Satoshi/, download all inventory but send nothing back) |
17:53:10 | op_null: | but yes, I'll be more than interested to read it |
17:53:16 | lclc: | lclc is now known as lclc_bnc |
17:54:31 | amiller: | there are a handful of nodes that appear to connect to absolutely everyone, markets.blockchain.info is one |
17:56:08 | op_null: | blockchain.info's nodes are quite funny actually. they're terribly modified 0.7.x branches of the Satoshi client, the connection limit raised, mempool disabled, and all of the timings in the reconnection section removed. |
17:57:00 | amiller: | op_null, how do you know that? i didn't look for source for their actual node |
17:59:09 | op_null: | amiller: it's not published, you can just see from their network behaviour what it is. if you don't filter their servers you almost always end up being connected to by them. if you want their IP addresses you just look at https://blockchain.info/connected-nodes until you appear on them. |
18:00:39 | amiller: | yeah |
18:02:22 | op_null: | for a while they had all the transaction validity checks disabled in order for Mt Gox's invalid transactions to show up in their block explorer. https://people.xiph.org/~greg/21mbtc.png |
18:04:37 | JeremieDeNoob: | how does the address system work? is an ip address encoded in the address data or is the address registered on the network by the user who creates it? |
18:06:51 | op_null: | JeremieDeNoob: addresses are discovered in 3 ways depending on how the node is operating. they attempt to use their own rumoured list of peers from previous runs, if that fails they use DNS to query a seed and find more valid IP addresses, and if that fails they attempt to connect to hardcoded peers.
18:07:58 | op_null: | with the rumouring system peers choose to announce themselves to their peers, or not, depending on how the node is configured.
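(A minimal sketch of the fallback order just described; the helper names are made up for illustration and the hardcoded address is a placeholder, though seed.bitcoin.sipa.be is one of the real DNS seeds:)

    # Rough sketch of Bitcoin peer discovery as described above (illustrative only;
    # these function names are not Bitcoin Core identifiers).
    import socket

    DNS_SEED = "seed.bitcoin.sipa.be"     # a real DNS seed
    HARDCODED_PEERS = ["203.0.113.7"]     # placeholder; the client ships a fixed list

    def load_previous_peers():
        # 1) peers rumoured via 'addr' messages on earlier runs (persisted to disk)
        return []                         # pretend the saved database is empty

    def query_dns_seed(seed=DNS_SEED, port=8333):
        # 2) a DNS seed returns A records for recently seen listening nodes
        try:
            return [a[4][0] for a in socket.getaddrinfo(seed, port, socket.AF_INET)]
        except socket.gaierror:
            return []

    def discover_peers():
        # 3) fall back to the hardcoded list only if everything else fails
        return load_previous_peers() or query_dns_seed() or HARDCODED_PEERS

    print(discover_peers()[:5])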
18:11:17 | JeremieDeNoob: | hmmm |
18:11:35 | JeremieDeNoob: | and what happens if you send coins to a non-existent address?
18:11:58 | op_null: | this is probably more a question for #bitcoin. |
18:12:22 | andytoshi: | JeremieDeNoob: #bitcoin please |
18:12:29 | andytoshi: | this is a research channel |
18:27:20 | Guest48334: | Guest48334 is now known as artifexd |
18:34:39 | Sub|zzz: | Sub|zzz is now known as SubCreative |
18:41:19 | cbeams_: | cbeams_ is now known as cbeams |
19:23:31 | sl01_: | G |
20:48:48 | pigeons: | another proof of stake blog post from vitalik |
20:50:41 | op_null: | "..particularly with regard to the supposedly fundamental “nothing at stake” problem. As it turns out, however, the problems are solvable.." |
20:52:50 | pigeons: | the solution for new nodes appearing on the network that don't know the current state is to ask blockchain.info ;P |
20:53:01 | pigeons: | really says that |
20:54:16 | gmaxwell: | yes, indeed many systems work just fine if you invoke centralization. Of course you can also dispense with the blockchain, mining, etc, entirely under that model.... and in doing so get something much more scalable.
20:54:47 | andytoshi: | does pos.pdf say this? "obviously you can evade this by changing the security model" |
20:55:35 | andytoshi: | actually w/e, if that needs to be said the reader is hopeless |
20:55:39 | andytoshi: | or shilling |
20:55:48 | op_null: | pigeons: oh hell it does say to trust blockchain.info. |
20:56:52 | kanzure: | andytoshi: i think saying that is quite relevant |
20:57:00 | gmaxwell: | andytoshi: I thought it was clear enough; but perhaps that point deserves to be made more clear... not because the real audience of the paper needs to hear it, but because it's used by confused or dishonest people to be overly dismissive. |
20:57:19 | sl01: | "This security assumption, the idea of “getting a block hash from a friend”, may seem unrigorous to many; Bitcoin developers often make the point that if the solution to long-range attacks is some alternative deciding mechanism X, then the security of the blockchain ultimately depends on X, and so the algorithm is in reality no more secure than using X directly" |
20:57:20 | kanzure: | andytoshi: to someone trying to think about distributed consensus, it may not be obvious to them that the solution is to not use distributed consensus |
20:57:33 | sl01: | "However, this logic ignores why consensus algorithms exist in the first place. Consensus is a social process, " |
20:58:11 | kanzure: | whether or not it is a social process will not violate relativity |
20:58:29 | andytoshi: | gmaxwell: i'll think about it. kanzure made a point to me last night that my documents are not really structured for the target audience we currently have for them |
20:59:03 | kanzure: | right, something like "your readers hate you" |
20:59:13 | andytoshi: | that is, they were written for people who wanted to be wizards, not laypeople dealing with well-funded charlatans..
20:59:20 | gmaxwell: | to some extent this is making the same mistake that the patent office makes in thinking that an algorithm running in the mind of a man is fundamentally different than running in semiconductors.
20:59:47 | kanzure: | yes, people do not have magical relativity violating powers |
20:59:51 | gmaxwell: | andytoshi: well 'tis not our remit to save everyone from charlatans.
21:00:19 | kanzure: | certainly |
21:00:49 | op_null: | andytoshi: your documents do well arguing to say, a hacker news reader. |
21:01:10 | gmaxwell: | andytoshi: Surely we'd like to do some of that as a side effect; but at least what I've thought much of the goal was being able to sync up people like e.g. kanzure with basic premises we understand. |
21:01:21 | kanzure: | instead of writing "You are wrong because X" it may be more helpful to write "Here are some things that are important, and here is what Bitcoin does, and here is why some related proposals are totally broken and not worth your attention. Sorry if it wasn't apparent that this was contested, but I can't be held responsible for the levels of junk out there." |
21:01:22 | moa: | well the algorithm running in a semiconductor can be objectively measured and has different physical constraints |
21:02:07 | kanzure: | (the apology is not necessary of course) |
21:02:13 | gmaxwell: | moa: this isn't really making the case though... I mean, it makes the case for more "Computational security against review", perhaps, but I doubt security against review should be anyone's goal. :) |
21:02:49 | pigeons: | usually the response i get to andy's papers is "he didn't talk about ~my~ PoS algorithm or ~my~ twist on making an asic resistant system"
21:02:54 | op_null: | andytoshi: I'm not sure how much contact you have with altcoins, but there's an interesting behaviour that's developed around pos.pdf now. it's seen as the "FUDers post this but they're wrong" easy way out of the argument, because of course NXT has solved all of those problems but the author is too dumb to realise it. |
21:03:16 | kanzure: | andytoshi: arguably bram doesn't want to be a layperson, and he was extremely put off by the "Is ASIC resistance desirable? No." format to the point of not even reading past that. |
21:03:19 | andytoshi: | op_null: i've seen responses on reddit which dismiss it, but never ever a coherent argument |
21:03:23 | andytoshi: | about a single sentence |
21:04:10 | kanzure: | "NXT exists and works because centralization" might have to be made more obvious |
21:04:12 | andytoshi: | which makes me suspect that none of them have actually read it, because i have and found some typos, and there is no way i was totally correct in a document i one-shotted in an 8 hour "i should finally teach myself what pos is about" session |
21:04:18 | gmaxwell: | well again, the point of the document as it stands is not to argue that case against every carnie; ... it's sadly infeasible to do that because they're adaptive and unbounded in number.
21:04:30 | gavinandresen: | gavinandresen has left #bitcoin-wizards |
21:04:30 | op_null: | andytoshi: I don't think you'll get one out of NXT supporters in particular. they can't even describe their security decisions let alone how your document doesn't apply to them. their misgivings hinge on the fact that NXT now doesn't allow reorgs, which is the basis of their security assumptions now to some degree. |
21:04:40 | kanzure: | (of course, you don't have to say NXT explicitly. just "you can use centralization if you want, but the other parts become irrelevant and pointless") |
21:04:48 | helo: | carnie >.> |
21:05:15 | kanzure: | gmaxwell: how would you explain bram's eyebrow dismissal then? |
21:05:17 | gmaxwell: | helo: "get your popcoin, peanuts! altcoins gets your altcoins! hot salted altcoins! come see the bearded lady! hot salted altcoins" |
21:05:20 | andytoshi: | op_null: so, my response to that is simply "what the actual fuck", i unfortunately have developed an emotional allergy to trying to parse such confusion, so i don't |
21:05:23 | helo: | haha |
21:05:29 | op_null: | andytoshi: it's of course all well and good that people have to do manual reorganisations, but like vitaliks post it means that new people joining the network still have to "phone a friend" to get consensus. |
21:05:29 | kanzure: | arguably bram is someone that matches the target demographic |
21:06:01 | andytoshi: | bram is, and i think we were pretty successful with him once we convinced him we were not morons and he should maybe read our stuff |
21:06:21 | kanzure: | yes but it involved one-on-one hand holding |
21:06:38 | kanzure: | and he did look at the paper, but then dismissed it immediately because it disagreed with him blatantly ("Is asic resistance desirable? No.") |
21:06:47 | gmaxwell: | kanzure: he dismissed it out of hand with a really powerfully bad bit of reasoning. Not sure there is much to do about that.
21:06:49 | andytoshi: | well, he is uniquely qualified to be dismissive.. |
21:07:07 | kanzure: | i think that if the first word was not "no" he may have read on :) |
21:07:11 | andytoshi: | so some hand-holding does not worry me |
21:07:14 | op_null: | gmaxwell: expecting a lady and getting a drunken, shaved bear pretty well sums up altcoins. |
21:07:25 | gmaxwell: | I'd agree that he's in the target audience, but to some extent on the fringe of it. |
21:07:55 | moa: | vanguard maybe |
21:08:05 | gmaxwell: | op_null: "You can do this one out of every 30 times and still have 97% positive feedback." |
21:08:23 | op_null: | gmaxwell: wait that was pig faced woman, not bearded ladies that were shaven bears. |
21:09:03 | gmaxwell: | kanzure: (wrt fringe) I mean to the extent that his interest in, and patience for, learning what is already known is demonstrably low.
21:09:25 | gmaxwell: | maybe improving now that he's (maybe) less inclined to dismiss everyone involved in bitcoin as idiots.
21:10:08 | kanzure: | sure maybe he thought all of the altcoin-motivated-stuff was bitcoin |
21:11:01 | kanzure: | maybe making an outline of things that are already known would be helpful |
21:11:12 | kanzure: | and then references/links to irc logs (or whatever else) can be added later as they turn up |
21:12:16 | op_null: | why is vitalik still going on about ASIC resistance too? |
21:12:18 | kanzure: | i wonder if someone can replace nxt (and related) with thin clients connecting to a central server, and convince users to switch somehow |
21:12:32 | moa: | like an O'Reilly's for bitcoin maybe? |
21:12:39 | tromp_: | Is ASIC resistance desirable? One kind of resistance is, the other is not. |
21:13:00 | op_null: | tromp_: I'm pretty sure you have that term set as a ping in your client :) |
21:13:01 | Luke-Jr: | tromp_: it's undesirable and impossible in theory |
21:13:14 | Luke-Jr: | s/in theory/by definition/ |
21:13:24 | tromp_: | my client only pings for tromp:( |
21:13:34 | gmaxwell: | tromp_: when you also consider botnets and amazon, it's not clear to me that any kind is. But I fully agree that's debatable.
21:13:54 | gmaxwell: | Luke-Jr: impossibility doesn't imply undesirability. :)
21:14:03 | Luke-Jr: | gmaxwell: I know, but it's both :p |
21:14:21 | helo: | paypal is not centralized nxt? |
21:14:37 | gmaxwell: | kanzure: how do you know they haven't already? |
21:14:42 | op_null: | kanzure: there's at least one altcoin that is pretty much that. |
21:14:42 | Luke-Jr: | helo: only if owning PP currency meant you had PayPal stock? |
21:14:59 | tromp_: | the kind that's not desirable is having a complex compute-intensive pow requiring a very elaborate asic design
21:15:04 | kanzure: | gmaxwell: because then they would intersect the bitcoin world less, i would hope
21:15:05 | helo: | hmmm |
21:15:18 | tromp_: | that's the kind inviting the No response |
21:15:21 | op_null: | kanzure: there's one which is like 100k lines of obfuscated javascript with hardcoded passwords. for all we know it's totally centralised, and yes it's a traded thing that people use as a real altcoin.
21:15:57 | Taek: | Paypal has a federated 2 way peg with the US dollar |
21:15:59 | gmaxwell: | tromp_: you can seperate that further, complex design (NRE frontloading) is undesirable and potentially quite harmful, because it eliminates competition for hardware. |
21:16:21 | gmaxwell: | Taek: except as a federation it's particularly sucky, as it's a single party. |
21:16:25 | gmaxwell: | and it's unauditable. |
21:16:47 | gmaxwell: | And even if you can prove they've violated the protocol, it has no effect. |
21:17:07 | tromp_: | the other kind is that any conceivable (in familiar process technology) ASIC design won't have a huge performance gap with commodity hardware |
21:17:21 | tromp_: | i would say that kind is desirable |
21:17:35 | Taek: | gmaxwell: if PayPal acts sufficiently dishonestly the FBI will get involved, so it's not exclusively 1 party |
21:17:58 | Luke-Jr: | tromp_: now explain what stops me from taking the commodity hardware and removing the parts I don't need to make an ASIC that performs better |
21:18:06 | helo: | heh, it would be interesting if centralized-nxt had a split between "everyone else" and the central point |
21:18:26 | Luke-Jr: | tromp_: the only way to do that is to make the commodity hardware *be* the ASIC itself - but that's not ASIC resistance, it's ASIC-is-already-deployed
21:18:30 | tromp_: | ok, Luke, imagine a cheap octa-core arm that has siphash as a native instruction |
21:18:50 | tromp_: | you could hook that up to some dram chips and run cuckoo well. |
21:18:58 | Luke-Jr: | tromp_: then you've got the rest of the ARM core eating electricity, producing heat, and slowing it down |
21:19:04 | gmaxwell: | tromp_: maybe there should be a flowchart. I think that if/once you get down to gap reduction it's still not a closed argument, because botnets and commodity hardware; more considerations show up.
21:19:35 | tromp_: | or run the cuckoo logic on an fpga |
21:20:02 | tromp_: | the mining cost may still be dominated by dram cost |
21:20:09 | op_null: | FPGA have an annoyingly high cost for their speed |
21:20:34 | tromp_: | but cuckoo doesn't need much speed! only enough to saturate the dram |
21:20:41 | Luke-Jr: | IIRC some FPGA patents are expiring soon? ;) |
21:20:54 | gmaxwell: | Luke-Jr: yea... hopefully the fpga market should heat up. |
21:21:18 | tromp_: | any exotic ram, even if faster, will not be cost competitive |
21:22:05 | tromp_: | Luke, that arm core could produce less heat than the dram it's keeping busy |
21:22:51 | op_null: | isn't the power consumption of dram almost nothing? |
21:23:07 | tromp_: | my claim is that $100 worth of custom hardware will not be orders of magnitude more efficient than $100 of commodity hardware |
21:23:58 | tromp_: | which will keep many people mining on their existing hardware, even at a loss |
21:24:54 | gmaxwell: | tromp_: I have bitmain S1 miners here; underclocked as they are, they're only about 75% as power efficient as my Spondoolies SP10... but they are turned off.
21:25:26 | tromp_: | yes, gmaxwell, but they're just obsolete custom hardware |
21:25:50 | tromp_: | people don't turn off their desktop just because it's 2 years old |
21:26:06 | tromp_: | it still works fine for them |
21:26:23 | gmaxwell: | They stopped cpu mining with bitcoin while it was still power-profitable to do so, even at $0.3/kwh. |
21:27:32 | tromp_: | the commodity-custom gap is just way too big for bitcoin, and for scrypt as well |
21:27:38 | gmaxwell: | Pretty sure I was the last guy left still cpu mining (with my gobs of opteron cores), meanwhile people were all actively telling each other that cpu mining was stupid and pointless. We don't need to speculate about this.
21:27:59 | moa: | how can a computer authenticate that it is interfacing/communicating with a human? |
21:28:16 | helo: | moa: an oracle |
21:28:30 | kanzure: | (you have to cheat) |
21:28:49 | moa: | like a reverse turing test? |
21:28:56 | op_null: | tromp_: I think you are overestimating memory power usage somewhat. a random chip I picked off mouser screams along at 1.6GHz and draws 0.4W while doing it. 1.6W/GB or something like that. |
21:29:22 | tromp_: | yes, it's on the order of 1W per memory chip |
21:30:02 | kanzure: | moa: no |
21:30:53 | tromp_: | but an arm core won't have to do many siphashes to saturate that memory chip and could easily use less than 1W |
21:31:25 | tromp_: | an fpga would do it with milliwatts
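(A back-of-envelope version of that claim; every number below is an assumption chosen for illustration, not a measurement of any particular ARM core or DRAM part:)

    # Can a small hashing core saturate a DRAM chip on well under a watt?
    # All figures are illustrative assumptions.
    DRAM_BANDWIDTH_BYTES_PER_S = 6.4e9   # assumed usable random-access bandwidth
    BYTES_TOUCHED_PER_EDGE     = 32      # assumed memory traffic per edge update
    ENERGY_PER_SIPHASH_J       = 1e-9    # assumed ~1 nJ per siphash-2-4 on a small core

    edges_per_s  = DRAM_BANDWIDTH_BYTES_PER_S / BYTES_TOUCHED_PER_EDGE
    hash_power_w = edges_per_s * ENERGY_PER_SIPHASH_J

    print(f"edges/s needed to saturate DRAM: {edges_per_s:.2e}")
    print(f"hashing power at that rate:      {hash_power_w:.2f} W")  # ~0.2 W under these assumptions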
21:32:30 | op_null: | that's still a lot better than my PC hardware. it draws something like 75W at idle. |
21:32:55 | Eliel: | gmaxwell: Well, it's probably not worth the time to keep checking that the miner is still running fine if you don't make at least some minimum amount per day. |
21:34:48 | gmaxwell: | Eliel: turns out that general desktop users get really unhappy if their computer is busy, and they do actually turn them off, don't like the noise they make when not suspended, etc. (The reason litecoin's scrypt has such a small memory size is because anything larger thrashed caches so badly that it hurt interactivity, even running as an idle background task, apparently; or so said art)
21:34:55 | moa: | so anybody wondered about working an oracle into a mining algorithm? |
21:38:13 | BlueMatt: | amiller: yea, there's also that guy on the forum who seems to have a passion for connecting to everyone just because he can
21:38:25 | amiller: | BlueMatt, who? |
21:38:29 | BlueMatt: | amiller: also, IIRC, (s)he doesn't do any processing of data before relaying
21:38:32 | op_null: | amiller: /Snoopy0.1/ |
21:38:36 | amiller: | oh yeah snoopy |
21:38:42 | BlueMatt: | no, not snoopy |
21:38:52 | BlueMatt: | gangnam style |
21:39:00 | BlueMatt: | /nogleg |
21:39:02 | kanzure: | andytoshi: check out page 2 second to last paragraph http://www.chroem.net/VAPUR.pdf (not sure if this is novel) |
21:39:06 | op_null: | there's at least 4 of them. one badly pretends to be Satoshi but mucked up the version name. |
21:39:19 | BlueMatt: | snoopy is the eth guys, no? |
21:39:43 | kanzure: | "use this hash of the request to determine which nodes will be responsible for arbitration" |
21:39:49 | helo: | moa: the oracle is a centralized point of trust by definition |
21:39:53 | kanzure: | oh that's not repeatable, hm |
21:39:57 | helo: | so it kinda defeats the purpose :)
21:40:14 | moa: | not the oracle we are looking for |
21:40:28 | op_null: | BlueMatt: no, I don't think it's Ethereum related at all. |
21:40:47 | BlueMatt: | op_null: huh? nooooo, ETH Zurich
21:40:53 | gmaxwell: | BlueMatt: well it's easy to make noleg vanish off the network, just relay a transaction with a bad signature to it and everyone bans it. |
21:41:00 | BlueMatt: | gmaxwell: yes |
21:41:30 | BlueMatt: | op_null: tell me ethereum isn't trying to use the eth symbol :(
21:41:43 | BlueMatt: | ever heard of googling before selecting a name?
21:42:01 | op_null: | BlueMatt: oh no, sorry I extended eth to ethereum. |
21:42:52 | op_null: | BlueMatt: but yes, they do use ETH as their shorter name for Ethereum |
21:43:00 | BlueMatt: | gmaxwell: someone should run a node behind some regularly-changing IP that relays invalid signatures to all nodes with strange version messages
21:43:25 | BlueMatt: | gmaxwell: ie someone in .de behind deutsche telekom who gets a new ip every 24 hours whether they like it or not (not sure if they still do that...they used to) |
21:43:33 | op_null: | BlueMatt: people might have tried that already :> |
21:44:45 | op_null: | not on a wide scale, just to see if a particular node relays them (it didn't) |
21:47:11 | kanzure: | this section of http://www.chroem.net/VAPUR.pdf seems very likely to be broken or wrong: "In other blockchain implementations, nodes creating new blocks are free to put whatever content" |
21:47:50 | kanzure: | something about not allowing new peers unless existing peers agree? |
21:49:04 | kanzure: | *existing peers agree to allow the new peer |
21:50:11 | kanzure: | i don't think "reverse sybil attack" is quite the right name for "an arbitrary compatible history" |
21:52:17 | gmaxwell: | kanzure: so hard to not just completely ignore things that can't bother to get even the most simple details right. |
21:53:55 | op_null: | gmaxwell: doesn't stop that sort of thing from getting funding though, nobody else can see the red flags :( |
21:54:07 | bramm: | Hey everybody |
21:54:13 | tromp_: | hi, Bram |
21:54:15 | kanzure: | oh had i known that they received funding i wouldn't have bothered looking |
21:54:24 | BlueMatt: | we need a pre-funding #bitcoin-wizards arbitration system |
21:54:26 | bramm: | Oh hey Tromp, I wanted to ask you some questions about cuckoo |
21:54:33 | tromp_: | please do |
21:54:49 | bramm: | First, why is the graph bipartite? How does that matter? |
21:55:11 | tromp_: | to simplify the algorithm |
21:55:12 | moa: | BlueMatt: you could contract that out even
21:55:14 | op_null: | kanzure: careful, I don't know anything about that project and didn't say they'd got funding for it. just in general things with technical red flags aren't picked up by the people who might fund them. |
21:55:25 | Dizzle__: | Dizzle__ is now known as Dizzle |
21:55:42 | bramm: | But it could work fine on a non-bipartite graph?
21:55:45 | tromp_: | otherwise i have to deal with self-loops |
21:56:02 | bramm: | Oh right, that makes sense |
21:56:13 | tromp_: | but it also simplifies the trimming |
21:56:21 | tromp_: | which can alternate between the two sides |
21:56:54 | tromp_: | finally, the tmto algorithms that use a breadth first search would get significantly more complex |
21:57:24 | bramm: | So the overall view is that it requires N memory, and basically one pass over the whole thing with O(1) random lookups for each element? |
21:57:51 | tromp_: | no, there's many passes (rounds of trimming) |
21:58:04 | tromp_: | the basic algorithm is single pass though |
21:58:23 | tromp_: | and #passes is still constant |
21:58:23 | bramm: | I think I understand the basic algorithm, I don't understand the multiple passes thing, it doesn't seem to be in the white paper |
21:58:37 | tromp_: | yes, it's the edge trimming section
21:59:33 | tromp_: | although it doesn't say much on number of rounds, i think |
22:00:21 | tromp_: | the trimming just has to get the fraction of edges down to like 2% |
22:00:33 | bramm: | Oh I see, I assumed that that was just a single pass, I'll have to process it
22:01:00 | tromp_: | no, trimming is in fact the majority of runtime (>98%) |
22:01:26 | bramm: | So when you're done trimming everything remaining is in loops and it's a matter of finding a loop of the right length? |
22:01:35 | tromp_: | no, many paths remain |
22:02:03 | tromp_: | trimming just cuts off edges near leaves
22:02:25 | tromp_: | which happens to be the vast majority of edges |
22:02:29 | bramm: | How can there be any nodes which aren't part of a loop if all terminal nodes have been removed? |
22:02:53 | tromp_: | there's like a dozen rounds of trimming |
22:03:14 | tromp_: | it doesn't remove all leaf edges |
22:03:37 | tromp_: | doing that would require thousands of trimming rounds |
22:03:44 | tromp_: | since you can have such long paths |
22:03:49 | bramm: | Oh right |
22:04:09 | tromp_: | you just switch to the basic algorithm when you can afford to (memory wise) |
22:04:27 | tromp_: | (since the basic alg is very efficient at identifying loops)
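(A schematic of the trimming being described, purely for illustration: it uses keyed blake2b as a stand-in for the siphash-2-4 edge generator and Python dicts instead of the per-edge bitmaps that make the real miner memory-efficient.)

    # Schematic Cuckoo Cycle edge trimming (illustration only, not tromp_'s reference code).
    import hashlib
    from collections import Counter

    N = 1 << 12                 # nodes per side (tiny, for illustration)
    M = N                       # edges: average degree 1, the interesting threshold
    KEY = b"header-hash-here"   # in a real PoW this would be derived from the block header
    TRIM_ROUNDS = 12

    def endpoint(nonce, side):
        # Stand-in for siphash: keyed hash of (nonce, side), reduced to a node index.
        h = hashlib.blake2b(nonce.to_bytes(4, "little") + bytes([side]),
                            key=KEY, digest_size=8)
        return int.from_bytes(h.digest(), "little") % N

    edges = {n: (endpoint(n, 0), endpoint(n, 1)) for n in range(M)}

    for r in range(TRIM_ROUNDS):
        side = r % 2            # alternate which partition gets trimmed
        degree = Counter(uv[side] for uv in edges.values())
        # Drop every edge whose endpoint on this side has degree 1: it cannot lie on a cycle.
        edges = {n: uv for n, uv in edges.items() if degree[uv[side]] > 1}
        print(f"round {r}: {len(edges)} edges remain")

    # The few percent of surviving edges are then handed to the basic
    # (cuckoo-hashing style) algorithm, which follows paths to find 42-cycles.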
22:05:28 | tromp_: | did you run the code? |
22:05:44 | bramm: | No I've just read the paper and am absorbing it |
22:05:58 | bramm: | I'm struck by how special having an average fanout of 1 is |
22:06:16 | tromp_: | just git clone and make test, it may be instructive to see it in action |
22:07:01 | tromp_: | yes, it's on the threshold between having no and having tons of cycles |
22:07:30 | bramm: | Less than that and there are no loops, more than that and they're easy to find |
22:08:05 | tromp_: | classic S curve |
22:08:09 | Dizzle__: | Dizzle__ is now known as Dizzle |
22:09:38 | bramm: | Also all the potential crypto issues people were talking about yesterday are a non-issue. If you secure hash the input before using it to generate nonces then all you're relying on is that hashing for security |
22:10:16 | bramm: | I'm much more concerned about the possibility that there might be some clever algorithm which might get rid of all the random lookups |
22:11:37 | tromp_: | yes, i find it puzzling that zooko thinks there's anything to be attacked in the internal hash |
22:12:28 | tromp_: | algorithmic improvements are the only likely problem
22:12:34 | bramm: | I need to get lunch, be back in a minute with more questions |
22:13:07 | tromp_: | in the worst case cuckoo just reduces to having to compute a few billion siphashes |
22:21:43 | bramm: | Yes but what's really interesting is the random memory accesses |
22:22:26 | tromp_: | yes, those are hard to avoid |
22:22:30 | bramm: | We can get the memory usage with a simpler scheme which has a smaller size of proof |
22:22:55 | tromp_: | what do you mean?
22:23:28 | bramm: | Just using 4sum like I suggested will have a proof of work which is less than 200 bits instead of more than 1000 |
22:24:21 | bramm: | So, here's my thought as to an algorithm for trying to do cuckoo faster, primarily worrying about avoiding random memory accesses: |
22:25:08 | tromp_: | why would you want to avoid random accesses?
22:25:44 | bramm: | As an attacker, trying to do the proofs as quickly as possible, assuming that the random accesses are the expensive thing |
22:26:52 | bramm: | I mean, on the implementation side we want to do as few random accesses as possible to make things fast. I mentioned that shorter proofs thing because *if* it's possible to completely avoid the random accesses in implementation then there are some other proofs of work which might be preferable |
22:26:54 | tromp_: | cuckoo aims to make rnd access unavoidable |
22:27:07 | tromp_: | and thus makes the pow more power-friendly |
22:27:24 | bramm: | Right, we're in agreement on this |
22:27:43 | bramm: | So here's my thought about a way of finding the loops which might avoid a lot of the random accesses: |
22:28:35 | bramm: | First we make a list of all nodes which it's possible to reach after exactly one hop, and a back pointer for each from where it came from |
22:28:49 | bramm: | Then we sort these |
22:28:59 | tromp_: | bramm, did you read the sections on TMTO algorithms?
22:29:53 | tromp_: | you're proposing a variation on those. but they're not memory competitive with the trimming algorithm |
22:30:21 | bramm: | Then we make a new list of all the nodes which can be reached after two hops, again including back pointers, and not allowing backtracking to get here, and we again sort that |
22:30:54 | bramm: | repeat a certain number of times until the number of nodes left in the list is fairly small, then run the general algorithm |
22:31:34 | bramm: | Most of the paper's words have passed before my eyes, that doesn't mean I've understood them all :-) |
22:32:05 | tromp_: | you're proposing to speed up the TMTO algorithms by sorting |
22:32:33 | bramm: | Yes, because sorting avoids random accesses |
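(A rough sketch of the sort-based expansion being proposed, assuming the edge list is kept sorted by source node; the point is that every access becomes a sequential scan or merge rather than a random lookup:)

    # Sketch of one round of sorted frontier expansion (illustrative only).
    def expand_frontier(frontier, edges_by_src):
        """frontier: sorted list of (node, back_pointer) pairs;
           edges_by_src: list of (src, dst) edges sorted by src."""
        next_frontier = []
        i = 0
        for node, back in frontier:
            # advance the edge cursor sequentially until it reaches this node
            while i < len(edges_by_src) and edges_by_src[i][0] < node:
                i += 1
            j = i
            while j < len(edges_by_src) and edges_by_src[j][0] == node:
                dst = edges_by_src[j][1]
                if dst != back:                 # no immediate backtracking
                    next_frontier.append((dst, node))
                j += 1
        next_frontier.sort()                    # one sort per round, itself sequential-friendly
        return next_frontier

    # Repeat until the frontier is small, then hand over to the pointer-chasing
    # algorithm; whether the extra passes ever beat trimming is the open question here.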
22:33:17 | tromp_: | note that those algorithms are over 20x slower to begin with |
22:33:31 | bramm: | Well that could kill it right there |
22:33:43 | tromp_: | because they dont use memory as efficiently |
22:34:00 | tromp_: | you can try modifying the source code that i provide
22:34:16 | tromp_: | you can even win a bounty if you speed up the tmto's substantially |
22:35:43 | tromp_: | btw, note that my trimming algorithm also does some bucket sorting of accesses |
22:36:29 | tromp_: | but the accesses are still many cache lines apart
22:36:32 | bramm: | Yeah it might be a very similar thing, I'll have to spend more time thinking about it |
22:37:10 | bramm: | You can speed up bucket sorting by having separate near and far groupings, so you wait until there are multiple things to put into a far bucket before actually putting stuff there |
22:37:18 | tromp_: | so it's not like you're gonna get nice consecutive accesses |
22:40:11 | bramm: | Do you have any idea how to get a closed form formula, or even non-monte-carlo approximation, for the chances of there being a loop of a given length? |
22:41:33 | tromp_: | no, i don't |
22:41:46 | tromp_: | except for length 2 |
22:43:23 | tromp_: | but the expected # of such loops should be easy to derive |
22:44:21 | tromp_: | just using linearity of expectation |
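(A back-of-envelope version of that calculation, under the idealized assumption that each of the M edges is an independent uniformly random pair on an N+N bipartite graph; this is a heuristic model, not a quote from the paper:)

    E[\#\,L\text{-cycles}] \;=\; \sum_{\text{candidate } L\text{-cycles } C} \Pr[C \subseteq E]
        \;\approx\; \frac{N^{L}}{L}\left(\frac{M}{N^{2}}\right)^{L}
        \;=\; \frac{1}{L}\left(\frac{M}{N}\right)^{L}

(At the average-degree-1 threshold M = N this is roughly 1/L regardless of graph size; below the threshold it vanishes geometrically in L and above it it explodes, matching the S-curve behaviour mentioned earlier.)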
22:45:50 | Dizzle__: | Dizzle__ is now known as Dizzle |
22:48:10 | tromp_: | well, good news: cuckoo cycle was accepted for BITCOIN 2015 |
22:51:51 | bramm: | Good to hear |
23:00:02 | zooko: | Yay! |
23:00:03 | bramm: | Okay I understand the trimming now but don't follow why it speeds things up. Doesn't it still have to do a random lookup per edge? |
23:00:47 | bramm: | zooko, if the input to cuckoo is run through a secure hash before being used your concerns about cryptographic security within the core don't matter
23:01:06 | bramm: | I mean, it's trivial to generate strings which end in a bunch of zeros, we can still use those for proofs of work. |
23:01:10 | tromp_: | the trimming is not a speedup but a big memory savings |
23:01:28 | tromp_: | the basic algorithm takes 32 or 64 bits per edge |
23:01:38 | tromp_: | the trimming only takes 1 bit |
23:01:41 | bramm: | Oh right, because the limiting factor on cost is memory? |
23:01:50 | tromp_: | yes |
23:01:53 | bramm: | I see |
23:03:34 | zooko: | If we could figure out how to generate 4 edges from a single blake2s invocation, that would be about twice as efficient as generating 4 edges from 4 siphash-2-4 invocations... |
23:03:54 | bramm: | zooko, It doesn't matter, blake2 wins you no security over sip hash |
23:04:16 | tromp_: | zooko, just make a narrow blake2 of width 64 bits |
23:04:18 | zooko: | I don't understand why you say that. |
23:04:28 | zooko: | tromp: Yeah, that would be fine, and it would be about half as efficient as SipHash-2-4. |
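(One obvious way to realize zooko's idea, sketched only to make the arithmetic concrete: slice a single 256-bit blake2s output into eight endpoint values, i.e. four edges per call. The 32-bits-per-endpoint assumption and the key are illustrative, not part of any spec:)

    # Illustration: four edges from one blake2s invocation (assumes 32 bits per
    # endpoint is enough for the chosen graph size; not the Cuckoo Cycle spec).
    import hashlib

    N = 1 << 28                                     # nodes per side (assumed)

    def four_edges(key: bytes, base_nonce: int):
        digest = hashlib.blake2s(base_nonce.to_bytes(8, "little"), key=key).digest()
        words = [int.from_bytes(digest[i:i + 4], "little") % N for i in range(0, 32, 4)]
        return list(zip(words[0::2], words[1::2]))  # [(u0,v0), (u1,v1), (u2,v2), (u3,v3)]

    print(four_edges(b"\x00" * 32, 0))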
23:04:59 | bramm: | zooko, because the existence or not of a cycle is entirely dependent on the security properties of the secure hash which you ran your input through before generating the nonces
23:06:23 | zooko: | * zooko casts Summon lmgoodman |
23:08:13 | tromp_: | i think these discussions are going nowhere and we just have to agree to disagree on whether attacking cuckoo through lack of security of siphash is at all conceivable |
23:09:57 | tromp_: | i don't even understand what bramm just said :(
23:10:03 | gmaxwell: | if it's security irrelevant, replace it with the identity function or x*2862933555777941757+3037000493 mod 2^64 |
23:10:10 | gmaxwell: | thats much faster than siphash. |
23:10:12 | zooko: | I would be happy to try generating a more precise argument, but not right now. |
23:10:17 | zooko: | gmaxwell: ;-) |
23:10:36 | gmaxwell: | (and less code too) |
23:12:02 | bramm: | gmaxwell, There's a later step of finding the loop, if the function is *too* simple, there may be shortcuts to finding the loop. In particular for a modular function like you propose it would probably have very long chains for trivial mathematical reasons |
23:12:41 | bramm: | That said, sip hash is probably overkill - each step of loop finding is analogous to a single feistel round, and sip hash is much stronger than that |
23:13:40 | bramm: | tromp_, My point is, if an attacker can put any random stuff directly into the keying of the sip hash which forms the challenge then I'm a bit worried about attacks. If you sha256 it first there's nothing left. |
23:15:00 | tromp_: | i already told zooko he should not hesitate to replace that sha256 step by blake2 :)
23:15:06 | bramm: | Any attacks which are on *finding* the loop have very limited use of cryptography, because the cryptographic attack would have to outperform the alternative, which is to just do the cuckoo algorithm, which is by design fairly easy |
23:15:41 | gmaxwell: | "analogous to a single feistel round" is bordering on technobabble. The point I was making was that the function has security considerations but they're being swept under the rug here. (and the function I gave intentionally has a period of ~2^62 ... which you may perhaps find to be suboptimal. :P it wasn't a serious proposal) |
23:16:46 | gmaxwell: | again, if any attack is a non-consideration, then a trivial function is sufficient. |
23:16:59 | tromp_: | gmaxwell: the period is of no interest in cuckoo since there's no repeated hashing
23:18:02 | tromp_: | just a mapping to take a nonce to an edge endpoint |
23:18:43 | bramm: | gmaxwell, The only relevant question is whether you can find a 42nd pre image in less than 10 seconds |
23:19:20 | bramm: | That's basically what an attack would boil down to. Anything where you're confident in that, and you're fine. Which siphash is.
23:20:35 | gmaxwell: | According to what research? ... and why do you assume that this is the only interesting attack? What happens if an attacker is able to implement the function 100x more power efficiently? |
23:21:35 | bramm: | gmaxwell, Power efficiency doesn't matter because the CPU is sitting around bored |
23:21:58 | Luke-Jr: | lol |
23:22:08 | bramm: | gmaxwell, and 'a single round of feistel' isn't technobabble, the fact that you think that shows that you're unfamiliar with the design of block ciphers |
23:22:23 | Luke-Jr: | CPUs haven't sat around bored for like a decade
23:22:51 | gmaxwell: | I'm familiar with the design of block ciphers; It's technobabble because it says absolutely nothing about security. I know what the words mean, I don't believe that you do, however.
23:23:25 | bramm: | There are common expectations for what a single round of a feistel cipher does, and it's less than siphash |
23:24:06 | bramm: | Honestly I can't justify something which has symmetric cipher properties if you dismiss terminology from symmetric ciphers as technobabble |
23:25:13 | gmaxwell: | Sure, and a modern block cipher is analyzed for its properties overall and in reduced-round constructions. An arbitrary function stuck into a set of lifting steps does not magically make for a block cipher that meets any particular security requirement... nor do I have any reason to think that block cipher security behavior is at all relevant for a POW.
23:25:34 | gmaxwell: | I'm just seeing a lot of irresponsible design work in here today. It's disappointing. |
23:25:39 | moa: | probably need a tighter definition of technobabble here |
23:26:12 | bramm: | gmaxwell, Have you read through how cuckoo works? The vast majority of crypto concerns one might have simply don't apply |
23:26:53 | bramm: | starting with that doing a crypto attack when there's a perfectly normal proof of work as the intentionally designed immediate alternative is ridiculous. This isn't a block cipher. |
23:27:02 | gmaxwell: | I read an earlier version of tromp's paper on it. I have a general understanding of it.
23:27:23 | gmaxwell: | bramm: Glad that you've realized that it isn't a block cipher. |
23:29:23 | gmaxwell: | The security concerns are different from a block cipher. Internal structure in a work function has many times in the past led to surprising optimizations. One of the special challenges of work functions is that small optimization factors (like <10x) can still have a huge effect, which is unlike most cryptographic questions where we mostly care about asymptotic differences.
23:29:36 | bramm: | I really don't understand your point. Would you have more confidence in my proposal to use 4sum? That has a much stronger mathematical backing. |
23:30:15 | bramm: | gmaxwell, we were just discussing the possible optimizations of implementation of cuckoo, none of them have to do with the prng, that isn't the weak point |
23:31:02 | gmaxwell: | I think it's very concerning that in one breath it's argued that the structure of the internal hash function is irrelevant to security; and then in the next, replacing it with truly trivial (e.g. linear or identity) function is not. So whats the security criteria one function passes that the other does not? |
23:31:42 | gmaxwell: | bramm: someone optimizing doesn't care what you think is weak, they'll use any and all options, and to be secure a function must resist all of them; especially the ones you didn't think of.
23:31:44 | bramm: | It's lacking trivial modular mathematical properties. The sort of things which sip hash is designed specifically to not have |
23:32:52 | bramm: | sip hash is meant to be suitable as a keying algorithm in a hash table when an attacker is controlling the inputs, to continue to have even distribution regardless of what the attacker does. That is exactly the property which cuckoo is relying on |
23:33:32 | bramm: | You also haven't answered my question about using 4sum |
23:33:43 | gmaxwell: | Which properties, specifically, are required? So you're saying uniformity given attacker controlled but distinct inputs? even though the attacker knows the function? |
23:34:21 | gmaxwell: | bramm: the underlying problem is more or less irrelevant to the specifics of optimization; assuming it's well studied in general. |
23:34:56 | gmaxwell: | Zooko made the argument above that blake2's uniformity properties had a ton of peer review and this was previously dismissed as inconsequential.
23:35:33 | bramm: | gmaxwell, 4sum is extremely well studied. And the number of inputs into it is small enough that using a cryptographically secure hash for it wouldn't cost a significant amount of cpu |
23:35:52 | bramm: | not inconsequential, overkill |
23:36:31 | bramm: | And I mean seriously, are you even aware of how many layers of stuff you'd have to break through to attack cuckoo based on the crypto, and how easy the same computations are to do just by doing them? |
23:36:53 | gmaxwell: | (I'm mostly just continuing zooko's argument. My own concerns are higher level; and more meta. E.g. concern about the wisdom of the specific goals, which reasonable people can disagree on, and concern with what sounds like a very sloppy and dismissive attitude around security)
23:37:44 | bramm: | gmaxwell, this isn't sloppy. Cuckoo is a coherently thought out and specified primitive. It's being published to be studied. It's based on well known primitives and problems |
23:37:50 | gmaxwell: | bramm: I think you are showing a remarkable lack of awareness of how easy it is to screw up a part of a cryptosystem. Again I repeat, no fundamental advance is needed in any particular area for a design to turn out to be substantially optimizable.
23:38:21 | gmaxwell: | bramm: yes, the work here I introduced you to less than 24 hours ago, if you might recall. |
23:38:23 | bramm: | gmaxwell, You're making a highly general statement where I'm making a very specific statement about a very specific thing |
23:39:09 | adam3us: | tromp: your restatement of andytoshi's claim that asic resistance is not desirable seems fair to me. (that a PoW with only a negligible perf difference between general-purpose and custom hardware could be interesting). however it seems generically impossible. "hardware wins"?
23:39:52 | gmaxwell: | Because I haven't gone out and broken it yet. An example is scrypt, which even has some nice security proofs... and then in litecoin, when the rubber met the road, the gpu implementations were able to get big TMTO gains and produce high performance implementations that were previously claimed to be impossible.
23:41:27 | adam3us: | tromp: and it seems desirable to have cheap hardware not expensive hardware so that we get closer to the objective of having the proof of cost be predominantly electrical cost (not amortised equipment cost) |
23:41:44 | gmaxwell: | bramm: so what I'm saying is that the dismissive approach of "construct X is well studied, and Y is well studied, and Z is widely used, all in areas unrelated to POW" doesn't mean that a particular composition of them is secure. I recommended talking to tromp because he's basically the only person in this space even trying to do a good job. (and AFAICT doing so, within the confines of the goal he's adopted and the limitations of a single person working for a fairly short amount of time)
23:41:50 | bramm: | gmaxwell, I'm very concerned about such things for cuckoo, but like I said before the problems aren't in siphash |
23:42:18 | bramm: | ... I am talking to tromp |
23:42:34 | gmaxwell: | bramm: perhaps it's not the weakest link, sure. That I can buy. I wouldn't have brought it up myself, other than the fact that I experienced some horror at zooko expressing concern and being dismissed.
23:43:04 | gmaxwell: | I know you are. I pointed that out because the prior sentence sounded like I was dismissing his efforts, which wasn't my intention. |
23:43:45 | bramm: | gmaxwell, My dismissiveness of what zooko is saying is over his contention that it may be possible to manipulate and insert loops or something like that, which it clearly isn't, because of the cryptographic properties of the hash function used at the beginning |
23:44:05 | bramm: | I mean, there are possible attacks, but not there |
23:46:01 | coke0: | What is the best case scenario with ASIC resistant PoW? So for a while you have more de facto decentralization due to many people having low marginal cost in CPU cycles. The minute mining becomes a bit big, people will be happy to rent their CPU power to a pool. In general, cheap CPU cycles are an anomaly bound to be arbitraged away in the future |
23:46:04 | gmaxwell: | bramm: How are you getting to that reduction? e.g. if I grind the initial hash, if the interior portion is sufficiently weak I may be able to very rapidly produce solutions. Keeping in mind that there is something like a 20,000:1 difference in hashes/$ for sha256 between a general cpu and dedicated hardware (and considerably more in terms of H/joule).
23:47:00 | bramm: | tromp_, My thought is that there may be some tricks which can reduce random access lookups in the trimming phase: First bucket sort as many edges as you can fit in near cache, then update those in memory in order, then maybe you've done fewer far random accesses |
23:47:37 | tromp_: | bramm: that's what the current implementation does |
23:47:46 | gmaxwell: | coke0: right that is related to the botnet/ec2 concerns; e.g. million node botnets. It's hard to reason about what the implications are. They've been used to blow away some altcoins in the past.
23:47:56 | tromp_: | using L2/L3 cache for the buckets |
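(A toy version of that bucketing; Python can't demonstrate cache residency, so this only shows how the access pattern is reshaped: scatter updates into per-bucket staging lists with sequential appends, then process each bucket's narrow counter range while it would be cache-resident:)

    # Toy bucketed degree counting (the real miner sizes buckets to fit L2/L3 cache).
    NODES = 1 << 20
    BUCKETS = 64
    SPAN = NODES // BUCKETS

    def bucketed_degree_count(endpoints):
        staging = [[] for _ in range(BUCKETS)]
        for u in endpoints:                 # one sequential pass over the edge stream
            staging[u // SPAN].append(u)    # appends are sequential within each bucket
        degree = [0] * NODES
        for bucket in staging:              # each bucket touches only a small counter range
            for u in bucket:
                degree[u] += 1
        return degree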
23:48:10 | gmaxwell: | coke0: but thats a space thats hard to reason about. |
23:48:20 | bramm: | gmaxwell, If the inputs are secure hashed first, then there's nothing an attacker can do to increase or decrease the chances that any particular input they try actually contains a cycle, because the hash is cryptographically secure
23:48:54 | coke0: | less cryptography, more game theory |
23:49:02 | bramm: | The borg is coming to get us! |
23:50:09 | bramm: | gmaxwell, Now determining whether there is a cycle and finding it, that's another story |
23:50:26 | tromp_: | adam3us,coke0: i don't know what all the implications are of shifting power costs to equipment costs. but i feel that narrowing the custom/commodity performance gap is desirable |
23:51:22 | tromp_: | and beneficial for decentralization |
23:51:56 | Taek: | I also don't fully understand why high ongoing costs (electricity) would be desirable vs. high equipment costs
23:52:38 | bramm: | tromp_, How do you know the size of the near cache? |
23:52:41 | adam3us: | tromp: this (narrowing range of equipment advantage) was the original motivation of the memory bound functions (memory bandwidth limited) that memory latency was less varying than cpu power. |
23:52:45 | gmaxwell: | Taek: because equipment costs are amortized. E.g. you can see them as a centralization effect. Now, I'm not trying to argue that this consideration breaks anything in particular.
23:53:06 | tromp_: | bramm: you'd recompile for different platforms on which you run cuckoo |
23:53:12 | gmaxwell: | It's just a consideration which I don't well understand at this point. |
23:53:25 | tromp_: | bramm there's a few #define's for setting bucket sizes |
23:53:29 | moa: | Taek: electricity is already widely available, expensive hardware not |
23:53:34 | bramm: | Ah, gotcha |
23:54:04 | coke0: | tromp_: I tend to agree, it buys a few years, but it feels like a bandaid |
23:54:29 | gmaxwell: | tromp_: an argument Luke-Jr was trying to make earlier was that sha256 grinding asics are already a commodity. Not as much as standard dram, but people are doing 20 million dollar mfgr runs of them, you can buy them by the reel fabricated on state of the art process. Just something to think about.
23:54:32 | tromp_: | adam3us: also latency is a lot less varying than memory bandwidth |
23:54:35 | zooko: | Okay, I have another thing to offer on the "hash function in cuckoo" topic. |
23:54:42 | zooko: | I hope it is not unwelcome. |
23:54:45 | zooko: | ☺ |
23:55:03 | zooko: | It would be possible to naively assume that a successful attack on cuckoo would have to take one of two forms: |
23:55:04 | Taek: | hmm. moa: shipping is pretty cheap, but centralization of manufacturing facilities could be an issue |
23:55:24 | zooko: | a) get a random graph, find a better optimization for finding a cycle in it |
23:55:28 | adam3us: | tromp: if costs are shifted to equipment, do we know that the cost is linear in performance. an old example is the people doing the memory bound pow stuff back in 2002 would probably be surprised by CPU cache sizes these days. |
23:55:46 | zooko: | (including heuristically, i.e. your better optimization can choose to give up and go to a new random graph, if that helps) |
23:55:47 | tromp_: | gmaxwell: but they're not a commodity that holds value well over a few years, unlike DRAM
23:55:57 | gmaxwell: | If it does turn out to be the case that specialized CRAM or on-chip something-or-other is several times more cost effective for cuckoo, then I'd worry that even if the gap was small, experience says all the non-specialized hardware is out of business, and that new hardware may be less commodity than sha256 grinding asics. But this is somewhat in the realm of speculation.
23:55:59 | kanzure: | if there are truly no particular manufacturing optimizations to make for DRAM then would homebrew DRAM manufacturing work |
23:56:01 | zooko: | b) find a way to trick cuckoo into generating a random graph that has a cycle sitting there for you on a silver platter. |
23:56:01 | tromp_: | and they're totally single purpose |
23:56:06 | adam3us: | tromp: or hardware hackers are more creative than software people give them credit for.
23:56:34 | zooko: | But, this simplistic dichotomy would overlook possible attacks on cuckoo which involve biasing the graph in some way so that a (heuristic) cycle-finder can be more efficient. |
23:56:51 | tromp_: | adam3us: cuckoo can scale memory use dynamically |
23:57:18 | zooko: | Including the possibility of parallel approaches, i.e. the thing I suggested yesterday where you generate many biased graphs with some relation to one another, in order to make your cycle-finder able to achieve superlinear cycle-finding in linear RAM.
23:57:38 | zooko: | (And of course superlinear computation.) |
23:57:58 | gmaxwell: | zooko: it's not clear to me how such an approach actually fits into the application, since each run of this is initialized with its starting conditions.
23:58:17 | zooko: | So, if you argue that there's no plausible way that a weakness in cuckoo's internal hash function could lead to a cycle sitting there for you on a silver platter, that's not a proof that there isn't some other successful attack. |
23:59:45 | zooko: | gmaxwell: my understanding of cuckoo may be incomplete... I think that "the attacker", i.e. the miner, has to build on top of a given input (e.g. the hash of the current block), and then gets to choose arbitrary nonces to mix in. |