00:04:15 | andytoshi: | justanotheruser: probably you want to broadcast random data so that even if there is no traffic it is hard to do analysis |
00:06:07 | brisque: | andytoshi: chaffing your connection would be difficult though. if you limited yourself to 0.3kb/s that's a ceiling you can't easily go past. as soon as you go above that it's fairly apparent that you're doing /something/. same issue as before. |
00:06:51 | brisque: | andytoshi: if you ramp up the chaff data to a useful level, you're suddenly burning terabytes a month for no real purpose. |
00:07:59 | andytoshi: | yeah, maybe there is a way to have chaff based on a rolling average of actual data usage |
00:08:03 | c0rw1n: | do you _have_ to send your message in full? or could you be sending it .3kbps at a time?
00:08:33 | jron: | after over an hour, the most interesting quote to come out of the hearing has been: "...I think the level of engagement and the positive reception that bitcoin companies are now getting from certain banks has led us all to believe that we're very very close to the banking industry opening up to bitcoin. I think we're probably 2 or 3 months away from some well known banks coming out with kind of clear procedures on how to work with them as a bitc
00:10:03 | sipa: | with them as a bitc[...] |
00:11:10 | jron: | with them as a bitcoin company and they'll position themselves as a bitcoin friendly bank." - Barry Silbert |
00:11:32 | andytoshi: | brisque: there is an example of the uncertainty principle for fourier transforms involving water waves, where you can't simultaneously determine the waves' frequency and breadth or something like that. using that idea you can smear out the actual changes in traffic volume |
00:11:39 | andytoshi: | i'll see if i can find that.. |
00:16:42 | jron: | oh, and that someone is trying to remake e-gold/goldmoney.
00:28:14 | andytoshi: | brisque: i can't find it, but i did find a paper by folland called "uncertainty: a mathematical survey" which gave the formulation that i wanted: there exists some number (1/16pi or something) which bounds below the product of your variance and your fourier transform's variance. for waves this means you can measure the water height arbitrarily well, or the wave frequency arbitrarily well, but not |
00:28:17 | andytoshi: | both |
00:28:22 | andytoshi: | (though ofc you can just do two measurements) |
00:29:09 | andytoshi: | so the better you keep your chaff quantity following a sine wave, the worse time an attacker will have determining the actual data level |
00:30:25 | andytoshi: | since the attacker can only measure frequency in that case, he can't measure actual bandwidth without knowing what's real and what's not |
00:32:11 | andytoshi: | then for example if you can always keep your bandwidth uncertain within +/- 10kb/s, and don't increase the amount of chaff by more than 10kb/s/day, an attacker can only see changes in bandwidth usage with granularity of one day, thus defeating timing analysis |
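The bound being recalled above is the Heisenberg uncertainty inequality in the form given in Folland and Sitaram's survey; with the Fourier convention shown, the constant is 1/(16 pi^2). Reading |f|^2 as the traffic-volume profile in time and |f-hat|^2 as its spectrum, keeping the chaff concentrated near a pure tone in frequency forces its time-domain profile to be spread out, which is the smearing argued for above.

\[
\left(\int_{\mathbb{R}}(x-a)^2\,|f(x)|^2\,dx\right)
\left(\int_{\mathbb{R}}(\xi-b)^2\,|\hat f(\xi)|^2\,d\xi\right)
\;\ge\;\frac{\|f\|_2^4}{16\pi^2},
\qquad
\hat f(\xi)=\int_{\mathbb{R}} f(x)\,e^{-2\pi i x\xi}\,dx,
\]

valid for any \(f\in L^2(\mathbb{R})\) and any centres \(a,b\in\mathbb{R}\).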
00:33:21 | brisque: | hm. I've never believed that random timings and fake data really help to secure a service. if you're running something like RetroShare you're probably going to need to be attracting a lot of attention to yourself for anybody to bother doing traffic analysis. if they are, you can assume they're probably just going to get a warrant and bust your door down. |
00:34:35 | jrmithdobbs: | andytoshi: the shorter way of saying that is "run a bandwidth restricted tor relay on the same link" |
00:35:24 | andytoshi: | jrmithdobbs: yeah :} and randomly change the bandwidth cap |
00:35:49 | jrmithdobbs: | but i'm with brisque, I'm not so sure I buy shamir/etc's arguments on this topic |
00:36:39 | jrmithdobbs: | there's analysis to be done there but it's kind of like plugging the whole in a rowboat with your finger when there's 500 million more holes
00:36:47 | jrmithdobbs: | s/whole/hole/ |
00:37:22 | super3: | my question is what is the minimal amount of fake data you can throw around without just wasting bandwidth |
00:37:49 | super3: | i like the idea of just using it in random bursts rather than continual data usage.
00:37:50 | jrmithdobbs: | and I don't think we really have a correct answer yet but i've not specifically read the paper andytoshi mentioned :) |
00:38:00 | brisque: | even a kilobyte a second adds up, especially over multiple peers. |
00:38:05 | super3: | where is this paper? |
00:38:35 | super3: | brisque, also makes you stand out on a network. |
00:39:19 | justanotheruser: | andytoshi: yes, every N seconds you broadcast data |
00:39:35 | justanotheruser: | to all your peers |
00:39:54 | jrmithdobbs: | super3: in fact, i'm not sure anyone's actually looked for *generic* traffic, the only stuff I'm recalling specifically involves using spam/smtp as transport
00:40:31 | andytoshi: | jrmithdobbs: yeah, that's an example of what i'm saying about using periodicity to hide actual volume. so if you burst every second, and increase traffic whenever you need it, your attacker can see your volume changing with 1-second granularity |
00:41:05 | andytoshi: | i guess that's way way simpler than trying to shape continuous traffic to have decently periodic features.. |
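A minimal sketch (in Python) of the fixed-cadence burst idea just described, assuming the link is already encrypted so chaff and payload are indistinguishable on the wire, and that framing inside each burst lets the receiver discard the padding; the burst size, interval, and function names are illustrative, not any particular tool's API.

    import os
    import queue
    import time

    BURST_BYTES = 4096       # assumed fixed burst size
    INTERVAL_SECONDS = 1.0   # assumed send cadence

    def chaffed_stream(send, outbox: queue.Queue) -> None:
        """Every INTERVAL_SECONDS emit exactly BURST_BYTES: queued real data
        first, random chaff to pad out the remainder, so an observer sees only
        constant-size, constant-rate bursts."""
        leftover = b""
        while True:
            payload = leftover
            try:
                while len(payload) < BURST_BYTES:
                    payload += outbox.get_nowait()
            except queue.Empty:
                pass
            burst, leftover = payload[:BURST_BYTES], payload[BURST_BYTES:]
            burst += os.urandom(BURST_BYTES - len(burst))   # chaff padding
            send(burst)
            time.sleep(INTERVAL_SECONDS)

Raising real throughput then means raising BURST_BYTES, which is exactly the change an observer can see, hence the earlier point about only moving the cap slowly or on a schedule.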
00:41:47 | brisque: | super3: like the guy who puts way too many locks on his door. |
00:42:09 | super3: | brisque, im that guy |
00:42:22 | super3: | brisque, rather too many locks than not enough |
00:42:30 | jrmithdobbs: | andytoshi: if anything normalizing like that may obscure the original intent but has the side effect of calling attention to the traffic because NOTHING is that normal |
00:43:17 | brisque: | super3: locks seem a little silly when people have glass windows. |
00:43:30 | jrmithdobbs: | heh |
00:43:49 | jrmithdobbs: | tbqh, having the lock makes the lock do its job
00:43:52 | jrmithdobbs: | don't even have to lock it |
00:44:07 | jron: | here is the e-gold like company the lawyer referred to: http://www.coeptis.com/
00:44:57 | jrmithdobbs: | (in fact, i rarely do, lol) |
00:46:32 | jrmithdobbs: | super3: it's actually quite a fitting analogy |
00:46:58 | jrmithdobbs: | super3: you do realize that 98% of locks on the market can be opened in <15s with basically a week's worth of effort, right? |
00:47:30 | jrmithdobbs: | and said effort isn't salted so effort on one core of a similar type equates to effort on another core of the same design with different keying |
00:47:39 | super3: | jrmithdobbs, i agree with you |
00:48:16 | brisque: | locksport is great fun. |
00:48:31 | jrmithdobbs: | great party trick if nothing else |
00:49:05 | jrmithdobbs: | (and the "week's worth of effort" was from zero knowledge of how they work, not per core, to be clear ;p) |
00:49:31 | brisque: | I enjoyed the opening contests at defcon too, they even had casascius coins (to keep the comment on topic) |
00:50:10 | jrmithdobbs: | i like freaking out locksmiths |
00:51:03 | jrmithdobbs: | had one try and upsell me on some padlocks towing something recently, "Ya see this, this is so thieves can't get a pick in here" "what? yes you can, look: " ... |
00:51:08 | jrmithdobbs: | he almost called the cops, lol |
00:51:20 | jrmithdobbs: | (because said tools are illegal in tx unless licensed) |
00:52:33 | brisque: | well, be careful. fine line between a party trick and freaking people out. |
00:53:32 | jrmithdobbs: | it's more fun not to pick the locks and show people the releases on filing cabinets/etc instead ;p |
00:53:55 | jrmithdobbs: | *that* freaks people out .. no one thinks about this stuff, ha
00:55:13 | maaku: | there's a great story about feynman 'picking' the combinations of his colleagues' safes in the manhattan project
00:55:52 | gmaxwell: | where he went and precomputed the combinations and then appeared to be able to do it instantly? :P
00:56:11 | gmaxwell: | or was that where there was some bypass? |
00:56:50 | brisque: | combination locks are usually the easiest ones, all you need is a drink can and a pair of scissors. |
00:58:11 | jrmithdobbs: | gmaxwell: that sounds like a fun story hadn't heard it |
01:00:08 | gmaxwell: | one of the puzzles in this year's MIT mystery hunt, part of the runaround at the end, was a pin-tumbler lock in the form of a pool-table sized 'bed'. (you had to manipulate slats on the sides of the bed to pick the lock, after first solving some nested trick with a magnetic trigger)
01:01:19 | gmaxwell: | people in my team were kinda mobbing the bed and preventing effective work on it— someone called out "who here has picked a lock before" and 3/4 of the room, including everyone within 10 feet of the bed raised their hands... so that wasn't a good distinguisher on who should be taking the lead... |
01:01:29 | maaku: | gmaxwell: correct, iirc he was able to feel the last two (of three) numbers on an opened safe, and most people kept their safes opened while they were in the office |
01:01:36 | maaku: | jrmithdobbs: it's in "surely you're joking?" i think
01:12:30 | Emcy: | anyone ever picked those electronic dongle locks
01:12:44 | Emcy: | the ones where the key sort of looks like a coin cell |
01:21:33 | gmaxwell: | Emcy: I believe I've seen those in the form of the 'keys' used for segways. |
01:21:52 | gmaxwell: | I'd _assume_ they're cryptographic. |
01:22:24 | brisque: | I wouldn't. I expected the one for my car to, but it just uses rolling codes like all the rest. |
01:23:01 | brisque: | if you really want to piss somebody off, go out of range of the car and punch the unlock button a few hundred times. once the keyfob rolls past the acceptable window for the car, it's useless. |
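A sketch of the resynchronization window being described here, under the simplifying assumption of a counter-plus-MAC rolling code (real fobs such as KeeLoq encrypt a counter instead, but the window behaviour is the same); WINDOW, the key handling, and the class names are illustrative.

    import hashlib
    import hmac

    WINDOW = 256  # how far ahead of its stored counter the receiver will look

    def rolling_code(key: bytes, counter: int) -> bytes:
        # stand-in for the keyed code a real fob derives from its counter
        return hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()[:4]

    class Fob:
        def __init__(self, key: bytes):
            self.key, self.counter = key, 0
        def press(self) -> bytes:
            self.counter += 1
            return rolling_code(self.key, self.counter)

    class Car:
        def __init__(self, key: bytes):
            self.key, self.counter = key, 0
        def try_unlock(self, code: bytes) -> bool:
            for step in range(1, WINDOW + 1):
                if hmac.compare_digest(code, rolling_code(self.key, self.counter + step)):
                    self.counter += step   # resynchronize to the fob
                    return True
            return False                   # fob has rolled past the acceptance window

    key = b"shared pairing secret"
    fob, car = Fob(key), Car(key)
    assert car.try_unlock(fob.press())     # normal press: accepted
    for _ in range(1000):                  # presses made out of range of the car
        fob.press()
    print(car.try_unlock(fob.press()))     # False: desynchronized until re-paired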
01:24:02 | Emcy: | they're rfid
01:24:41 | Emcy: | i used to have one for my dorm door.....the lock seemed to have a nifty internal power source |
01:24:57 | brisque: | oh wait, they're stock standard RFID tags probably. I saw a store stocking them.
01:25:16 | gmaxwell: | brisque: I know that some of the car ones are actually cryptographic because they've used snakeoil crypto that people have successfully attacked! (doh!) |
01:25:29 | Emcy: | and i once read something about those lock systems being able to form their own sneakernet via a writable area of the rfid and keep logs of when and who opens doors etc
01:25:35 | sipa: | gmaxwell: a friend of mine at university did :) |
01:25:56 | sipa: | (keeloq) |
01:26:37 | gmaxwell: | Emcy: I like the electronic locks that look like dial combination locks where spinning the dial powers it. |
01:27:25 | Emcy: | never seen that |
01:27:40 | gmaxwell: | They're pretty insanely secure because the only connection between the outside and inside is a couple wires, and all the locking is on the inside. |
01:27:52 | brisque: | gmaxwell: wonder how big a mechanical lock that used EC would be. it's presumably possible to make a mechanical computer that could do it, just it would be a little on the large side. |
01:27:57 | gmaxwell: | about the best attacks on a well built one are bugging the dial. |
01:29:38 | brisque: | bombe style with electromechanical calculation? |
01:32:05 | Emcy: | yep locks are pretty interesting |
01:32:42 | brisque: | likely impossible, but I'd pay big money to see a purely mechanical computer doing a SHA256 hash. |
01:33:09 | gmaxwell: | does it have to do the whole hash? :P |
01:33:54 | sipa: | i think being able to find 32 bits of it is not significantly easier |
01:34:05 | sipa: | well, even 1 bit |
01:34:15 | gmaxwell: | well I meant the whole function. |
01:34:26 | sipa: | you mean just a single compression function? |
01:34:26 | gmaxwell: | making a mechanical computer to compute one round wouldn't be that terrible.
01:35:34 | brisque: | wouldn't be very impressive though |
01:41:01 | Emcy: | people make stuff like that in minecraft |
01:41:32 | Emcy: | it probably counts as emulation, but you can see the signal travelling in the 'data lines' so its close enough |
01:41:47 | gmaxwell: | I think we're not exposing people to enough really awesome ideas such that they think spending their time making ALUs in minecraft is a good way to have fun. :P |
01:41:57 | Emcy: | i think someone made a full 16 bit FLU+ALU+registers and etc out of blocks |
01:42:23 | Emcy: | how is that not fun |
01:43:15 | brisque: | fairly time consuming for the result |
01:46:34 | Emcy: | either ALUs or this http://www.youtube.com/watch?v=afcudstM9zA |
01:46:39 | Emcy: | also fairly time consuming |
01:46:44 | brisque: | gmaxwell: \o/ https://pay.reddit.com/r/Bitcoin/comments/1wfbjn/get_your_coins_out_right_away_alleged_weakness/ |
01:50:01 | brisque: | gmaxwell: oh, and a bigger one https://pay.reddit.com/r/Bitcoin/comments/1wf5qb/possible_warning_btc_addresses_with_known_public/ |
01:55:34 | andytoshi: | EnronIsHere helpfully explains "In cryptography, there is always a shortcut. Often very difficult to find but it's always there somewhere. That point really can not be stressed enough."
01:57:01 | andytoshi: | i assume by "cannot be stressed enough" he means that his stressing program won't halt.... but he has no clue why since he doesn't believe in nonhalting programs :) |
02:06:50 | Emcy: | "stressing program wont halt" |
02:06:59 | Emcy: | life.exe |
02:17:18 | brisque: | andytoshi: this is a nice explanation too https://bitcointalk.org/index.php?topic=437220.msg4809894#msg4809894 |
02:18:00 | andytoshi: | ohh thx brisque i was wondering wth a "rendezvous point" was |
02:18:14 | brisque: | basically his precomputed keys, heh. |
02:45:08 | brisque: | I'm sure everybody has seen the person joining #bitcoin and spamming obscenities at the OPs. they're all listed on bitnodes.io as having been seen running a node at some point. |
02:45:11 | brisque: | is somebody seriously running a bitcoin-related botnet and spamming the channel with it? |
02:50:22 | brisque: | oh, actually. they're shared VPN addresses rather than a botnet. that's comforting. |
03:11:27 | super3: | is Luke-Jr around? i'm just about done with proof-of-pizz |
03:11:39 | super3: | proof-of-pizza* |
03:32:15 | twatcup: | twatcup has left #bitcoin-wizards |
03:46:53 | Luke-Jr: | we should all plan to plan out proof-of-steak at the Texas conference |
03:47:11 | Luke-Jr: | and announce it right at the end of the month |
03:47:20 | Luke-Jr: | maybe late by a day |
03:47:42 | justanotheruser: | Luke-Jr: have you heard of cyruscoin? |
03:47:48 | Luke-Jr: | no |
03:47:56 | justanotheruser: | It's based on proof of twerk |
03:48:57 | justanotheruser: | justanotheruser has left #bitcoin-wizards |
03:49:39 | tacotime_: | I'll be at the Texas conference, but I can't get behind proof-of-steak because I'm a pescetarian. :/ |
03:50:27 | Luke-Jr: | tacotime_: surely there is a seafood steak? |
03:52:51 | tacotime_: | I could do a salmon steak, I suppose. :D |
03:53:21 | tacotime_: | https://bitcointalk.org/index.php?topic=421842.msg4800547#msg4800547 |
03:53:30 | tacotime_: | He actually cracked a brainwallet privkey, heh. |
03:53:54 | Luke-Jr: | not hard, brainwallets are stupidly insecure |
03:55:25 | brisque: | tacotime_: brain wallet? it was probably created with his "weak key generator", which only generates keys of which he has the rainbow tables for. |
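For context on why this is easy: a classic brainwallet turns a passphrase directly into the private key as SHA256(passphrase), so an attacker can run a wordlist through the same derivation, compute the matching addresses, and watch them for funds. A minimal sketch of just the derivation (public-key and address derivation and the blockchain lookup are omitted); the candidate phrases are illustrative.

    import hashlib

    def brainwallet_privkey(passphrase: str) -> bytes:
        # classic brainwallet scheme: the 32-byte private key *is* the hash of the phrase
        return hashlib.sha256(passphrase.encode("utf-8")).digest()

    # an attacker does the same over a large corpus of phrases, then derives each
    # key's address and checks it against the blockchain for a balance
    for phrase in ["correct horse battery staple", "password1", "to be or not to be"]:
        print(phrase, "->", brainwallet_privkey(phrase).hex())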
03:56:04 | super3: | ha ha. cyruscoin thats a new one |
03:56:07 | tacotime_: | Heh. |
03:58:36 | super3: | once it's a little closer to the conference i'll find a good steakhouse (with some non-meat options too) and we can all go there
03:58:59 | super3: | perhaps we can even pre plan and have them accept Bitcoin |
04:25:03 | Graet: | Graet is now known as Guest92094 |
04:26:29 | Guest92094: | Guest92094 is now known as Graet |
05:25:29 | tacotime_: | tacotime_ is now known as tt_zzz |
05:27:38 | justanotheruser1: | justanotheruser1 is now known as plipfishy1 |
05:28:11 | plipfishy1: | plipfishy1 is now known as plipfishy |
05:28:18 | plipfishy: | plipfishy is now known as justanotheruser |
05:37:08 | helo: | where in tx? |
08:33:07 | amincd: | amincd has left #bitcoin-wizards |
09:41:17 | gmaxwell: | http://xkcd.com/1323/ < tehehe |
09:46:15 | _ingsoc: | gmaxwell: Care to make a single statement on Ethereum? For the press! |
09:46:34 | _ingsoc: | (I'm kidding about the press) |
09:47:39 | gmaxwell: | meh. |
09:48:07 | _ingsoc: | Hahaha. I thought as much.
09:48:13 | gmaxwell: | I'm happy to hear someone is exploring something different, really disappointed to see another group asking for millions of dollars for a bill of goods. The code posted so far is unimpressive. |
09:48:41 | _ingsoc: | What makes the code unimpressive? |
09:49:23 | gmaxwell: | I also think the goal is actively stupid, but in the hierarchy of goodness good > stupid > redundant; something that sounds foolish to me may turn out to be good ultimately (esp after some iteration to fix flaws the first couple times it gets knocked down and everyone gets robbed :P ) |
09:50:35 | gmaxwell: | There isn't (wasn't? it's been two weeks since I looked) much of anything there, I mostly looked at the script stuff, and it was clearly being done by someone with no experience programming a stack machine.
09:50:36 | _ingsoc: | Heh. People will go crazy if it flops. |
09:50:50 | _ingsoc: | Interesting. |
09:51:00 | _ingsoc: | The C++ code? |
09:51:09 | gmaxwell: | I looked at both the go and the c++ code. |
09:51:13 | _ingsoc: | Ah. |
09:51:35 | _ingsoc: | Know much about vbuterin? |
09:52:13 | gmaxwell: | ("technically turing complete, yes, but so is subtract-and-branch-if-less-than-or-equal-zero.") |
09:52:25 | scoofy: | scoofy has left #bitcoin-wizards |
09:52:34 | gmaxwell: | I've met him, seems like a nice guy, relatively quiet. I don't know him well. |
09:53:09 | _ingsoc: | It'll be interesting to see what happens in this space. Sounds better than Mastercoin at least. xD |
09:53:13 | gmaxwell: | I've been unimpressed at times with some of his writing on technical subjects, but addressing a general audience is difficult so that might not really mean much of anything. |
09:53:23 | _ingsoc: | True. |
09:53:45 | gmaxwell: | _ingsoc: I'm not sure how to distinguish it. I mean, mastercoin could _be_ this effectively. Thats one of the 'upsides' of basically selling a sheet of paper with promises. |
09:54:03 | gmaxwell: | It could become embodied in some way which is very technically different than the initial proposal. |
09:54:19 | _ingsoc: | Something about how Mastercoin is managed that makes me cringe. |
09:54:47 | _ingsoc: | Maybe if the ideas were in the right hands, I don't know. But so far it's sounded like a nightmare. |
09:54:59 | _ingsoc: | From a project management perspective. |
09:54:59 | gmaxwell: | well, I have the same cringe on the ethereum goal to raise 36 million dollars, which is just insane in my opinion.
09:55:24 | _ingsoc: | That's a bit of a misconception. They put the hard cap at 30k Bitcoin. |
09:55:38 | _ingsoc: | They were worried a whale would come and swallow up the sale. |
09:55:43 | _ingsoc: | They just need 500 BTC. |
09:56:07 | _ingsoc: | But claim to have transparent expenditure plans up to that point. |
09:56:15 | nsh: | why not... demonstrate something is viable before getting ludicrous and unnecessary capitalization? |
09:56:26 | nsh: | is that naive? i am not a business person |
09:56:28 | _ingsoc: | nsh: Ask all of Silicon Valley? |
09:56:42 | gmaxwell: | nsh: It's not naive. It's basic ethical behavior. |
09:56:50 | _ingsoc: | Agreed. |
09:56:58 | _ingsoc: | Problem is people need to eat I guess. |
09:56:59 | gmaxwell: | Especially for something which doesn't have infrastructural requirements for that sort of funding. |
09:57:05 | nsh: | * nsh nods |
09:57:18 | _ingsoc: | Their Github is supposed to be evidence of work. |
09:57:28 | _ingsoc: | Some might agree it's that, others may disagree. |
09:57:29 | gmaxwell: | _ingsoc: my living expenses are about 30k/yr and I live in one of the most expensive places to live in the world. How many people need to eat? |
09:57:37 | _ingsoc: | 22. |
09:57:46 | _ingsoc: | Well, I don't know what the proportions are. |
09:57:54 | _ingsoc: | But 4 founders. |
09:58:52 | _ingsoc: | Any prior Invictus involvement makes me nervous. Won't lie about that. |
09:59:14 | gmaxwell: | really it sounds like they're outsourcing all the risk, and I think thats not reasonable for initial development and it misaligns motives, but there is no need for me to be judgemental— people can decide if they'd like to fund it. |
09:59:32 | _ingsoc: | That's been the sentiment it seems. |
09:59:32 | gmaxwell: | And yea, well I was trying to not make any negative comments about the people. |
09:59:40 | _ingsoc: | Same. |
10:00:28 | gmaxwell: | and as I said, stupid > redundant. I'd rather have newer attempts even with dumb funding models, than more stuff that just copies the bitcoin codebase and changes ~nothing more than the name. |
10:01:22 | nsh: | the wheel of progress is oiled with the grease of fleece :) |
10:01:30 | gmaxwell: | (maybe we'll learn something; though I'm skeptical: basically no one uses the powerful scripting in bitcoin, the hard parts are UI and user education and such) |
10:03:19 | gmaxwell: | I'd like to see some of these things fail in novel ways. Ethereum losing all its non-miner validators will be very interesting. I'm sad that none of the altcoins have uncapped the block size. (No "SuperScalableCoin", AFAIK).
10:04:32 | grazs: | HaikuCoin, you must embed a unique haiku poem in every transaction
10:06:35 | gmaxwell: | grazs: you've seen my covenant thread? that kind of thing is possible in the form of fungibility loss if you have insufficiently constrained script. :P
10:08:10 | grazs: | gmaxwell: no, please share :) |
10:09:08 | grazs: | btw, nice collection of alt-ideas in the wiki. it's been a nice topic of conversation among my colleagues |
10:09:56 | gmaxwell: | grazs: it's kinda old now, there is probably a bunch of things I'd add if I updated it. |
10:09:59 | gmaxwell: | grazs: https://bitcointalk.org/index.php?topic=278122.0 |
10:11:26 | grazs: | gmaxwell: I live for bad ideas, will read this at lunch! |
10:13:18 | gmaxwell: | (the general concept has positive uses too, but most _random_ ways of using that particular expressive power are really bad)
14:13:00 | tt_zzz: | tt_zzz is now known as tacotime_ |
14:26:43 | _ingsoc: | _ingsoc is now known as Guest48443 |
15:03:46 | _ingsoc_: | _ingsoc_ is now known as _ingsoc |
15:05:28 | TD_: | TD_ is now known as TD2 |
15:18:41 | TD2: | TD2 is now known as TD |
15:21:27 | roidster: | roidster is now known as Guest40763 |
16:25:17 | TD2: | TD2 is now known as TD |
17:33:34 | otoburb: | otoburb is now known as otoburb` |
17:33:40 | otoburb`: | otoburb` is now known as otoburb |
18:37:21 | gmaxwell: | May be of interest to some here: https://lists.torproject.org/pipermail/tor-dev/2014-January/006146.html "Key revocation in Next Generation Hidden Services" |
18:38:01 | gmaxwell: | (I have a sketch for a revocation solution too... but didn't post it because I felt the protocol was too complicated to bother implementing)
18:51:30 | maaku: | gmaxwell: how do you get by on 30k/year here? |
18:51:42 | gmaxwell: | I don't have Kids |
18:52:01 | gmaxwell: | (I don't mean this to insult having kids, but its probably the first major factor! :) ) |
18:53:38 | maaku: | yeah |
18:55:12 | gmaxwell: | Otherwise, heck if I know. A moderate amount of lifestyle hypermiling. I don't drive. (I have an old truck, but I think I only used it a dozenish times last year). I cook. I don't buy gizmos, though partially this is because I already own two lifetimes' worth of gizmos from a decade ago before I intentionally started trying to minimize my cost of living.
18:58:41 | maaku: | Yeah our rent alone is $20k/year |
18:58:43 | maaku: | If I were single, no kids, and had housemates I guess that'd be plenty doable |
19:08:34 | petertodd: | gmaxwell: re: msc/ether I've been arguing quite strongly that msc either be based on ethereum, do it better, or merge the projects |
19:09:02 | gmaxwell: | petertodd: sounds reasonable to me. |
19:09:29 | gmaxwell: | (whatever reservations I have on the ideas, they're not made worse by merging them, and may well be reduced by them) |
19:09:56 | petertodd: | gmaxwell: yup, and msc seems to have a number of people actually focused on gui's, workflows and other usually ignored details |
19:13:12 | phantomcircuit: | msc? |
19:13:18 | petertodd: | msc==mastercoin |
19:13:36 | phantomcircuit: | oh |
19:14:06 | phantomcircuit: | gmaxwell, it would be interesting to build a merged mine altcoin which does a bunch of stupid shit like that |
19:14:11 | phantomcircuit: | just to see what would happen |
19:15:25 | phantomcircuit: | gmaxwell, 2.5k/month in mountainview? im thinking the key is you split rent with your gf |
19:16:08 | gmaxwell: | phantomcircuit: no, actually the 30k figure is including all the shared expenses. |
19:16:39 | phantomcircuit: | does that include your electric bill? lol |
19:16:43 | petertodd: | gmaxwell, adam3us: still need to reply to your IBE ideas; got some paid work to get done first though on a deadline |
19:17:25 | gmaxwell: | phantomcircuit: It doesn't include my (e.g. mining) business expenses (which I already account for separately), nor should it, since that stuff is self funding.
19:17:51 | phantomcircuit: | i was kidding |
19:18:06 | gmaxwell: | my non-mining electricity usage is like $30/month. :P |
19:18:21 | phantomcircuit: | without monthly vehicle costs you can live pretty much anywhere in the us for relatively little |
19:18:48 | phantomcircuit: | rent/utilities/internet/food |
19:18:55 | petertodd: | phantomcircuit: aside from the problem that in many places in the us you can't live without that vehicle :) |
19:19:12 | gmaxwell: | petertodd: you _can_ but it requires careful consideration and effort. |
19:19:55 | gmaxwell: | at least any town with a population over 50k or so has some place in it that you could reasonably live without a car or at least without frequent use of a car.
19:20:17 | petertodd: | gmaxwell: right, I'm including <50k in that statement |
19:20:17 | gmaxwell: | but some kung fu balancing of needs is required. |
19:20:21 | maaku: | phantomcircuit: i don't, rent is a big issue in many places (silicon valley, nyc, dc, ...) |
19:20:22 | phantomcircuit: | petertodd, i imagine it's fairly complicated to live in mountain view without a car also |
19:20:45 | petertodd: | gmaxwell: the US also has places >50k with no public transport what-so-ever |
19:20:49 | maaku: | my wife and I are considering a move to montreal just to cut expenses... |
19:21:21 | petertodd: | phantomcircuit: heh, when I interviewed at google in mountain view I took the bus to the airport to get a sense of how screwed up the place was... |
19:21:37 | maaku: | phantomcircuit: actually mtnview is not that bad. it's well connected by train, lightrail, and bike paths |
19:21:59 | petertodd: | maaku: where are you now? |
19:22:02 | maaku: | but the rent differential is many factors more than car payments would be |
19:22:10 | maaku: | petertodd: san jose |
19:22:22 | maaku: | used to live in mountain view |
19:22:24 | petertodd: | maaku: didn't realize montreal was an option for you |
19:22:38 | phantomcircuit: | maaku, if you're single and dont care about roommates you can get a room in sf for ~800/month |
19:23:14 | gmaxwell: | phantomcircuit: nah. not at all, at least without kids. There are three supermarkets within a 10 minute walk of where I live, and the caltrain station (though it's pretty expensive, so it would dent the COL if I had to use it daily and pay for it) |
19:24:07 | maaku: | petertodd: well it's not per se, but it's easier for freelance americans to get a visa to canada than many other places |
19:24:13 | tromp__: | a car also becomes something of a necessity when you're no longer single... |
19:24:23 | tromp__: | and not living in a big city |
19:24:44 | petertodd: | maaku: ah. why montreal vs. toronto or something? |
19:25:31 | sipa: | tromp__: so you go from 2 single people each having no car, to a couple of two people each having a car? :) |
19:25:49 | gmaxwell: | tromp__: I'm not single, I haven't been single for >10 years... and I live in the suburbs. There certainly are places where a car really is mandatory, but in a lot of places (and not just crazy big cities) it is possible to organize your life so that you need to use a car very infrequently.
19:26:20 | tromp__: | in my case my fiancee relied on a car alrd |
19:27:21 | tromp__: | when she moved here (selling her old car) i went and got my us driver's license and bought a car |
19:27:49 | maaku: | petertodd: I have a cousin who is a permanent resident in Montreal & I've stayed with him for some conferences. Love the city, local tech industry, and quebec culture. |
19:28:04 | maaku: | From what I hear toronto would probably be a 2nd choice, but I've never visited |
19:28:21 | petertodd: | maaku: imo montreal > toronto re: beauty/culture/etc. |
19:28:33 | gmaxwell: | e.g. choosing work that is in proximity to reasonably priced places to live and groceries, and then living close to work. Owning a bike with some reasonable cargo accommodations. |
19:28:43 | tromp__: | i commute by bike everyday |
19:29:09 | maaku: | petertodd: yeah for me now that's the bigger concern ... thanks to bitcoin we can live anywhere |
19:30:13 | petertodd: | maaku: you know, rural iran is really beautiful in the mountains |
19:30:31 | maaku: | hahahaha |
19:33:26 | maaku: | seriously, we considered places like bali and thailand. but having a family means giving priority to things like access to health care and schooling :\ |
19:33:40 | petertodd: | maaku: heh, had a long discussion with my dad along those lines a few months ago actually - his job is head of regional economic development in the nwt (way north canada) and I was pointing out how in theory all these remote communities could easily have thriving economies with people doing remote telecommuting IT work and hunting... but of course that doesn't happen
19:33:52 | petertodd: | maaku: yeah, I like first world for that... |
19:34:58 | gmaxwell: | * gmaxwell waits for adam3us to suggest malta. |
19:36:09 | sipa: | if you like high rent and public transport that actually works, zurich isn't bad :) |
19:40:25 | maaku: | petertodd: probably more potential for arctic air-cooled data centers like we see in iceland and sweden |
19:40:55 | petertodd: | maaku: potential maybe, but right now the electricity infrastructure sucks and would cost billions to improve |
19:41:03 | maaku: | ah |
19:41:04 | petertodd: | maaku: not much generation capacity up there |
19:43:57 | petertodd: | maaku: it's a serious problem for the mines - a few km from my parents house is the transfer station for diesel, which has a tank farm with the same volume as a large sports stadium, and that's not even a full season worth of fuel |
19:44:37 | petertodd: | maaku: I worked it out once and that one farm had capacity for something like 6 hours of the worlds supply of oil |
19:45:28 | petertodd: | (all the mines use diesel generators for their electric supply) |
19:46:37 | gmaxwell: | petertodd: electricity infrastructure: https://bitcointalk.org/index.php?topic=170332.msg4808083#msg4808083 |
19:47:37 | maaku: | jeeze, you'd think there'd be wind, or geothermal (near the ring of fire at least) |
19:47:39 | petertodd: | gmaxwell: his electrician fucked that up big time... |
19:49:47 | petertodd: | gmaxwell: it should never be possible to do damage like that to any part of your electric wiring no matter how badly you abuse it if everything is done to code with proper-sized fuses |
19:51:11 | gmaxwell: | petertodd: yep, well apparently the _meter_ caught fire?! |
19:52:58 | petertodd: | gmaxwell: I have to wonder if someone modified it before, say to bypass something... |
19:53:26 | petertodd: | gmaxwell: I *think* meters are actually protected by a fuse at the pole in many places - haven't looked at the codebooks in years |
19:55:50 | gmaxwell: | petertodd: yes, they are protected by a pole fuse, though sometimes the pole fuses get shorted and don't work. |
19:56:13 | gmaxwell: | they're also really slow. |
19:56:37 | gmaxwell: | (I've blown one once, so I'm speaking first hand.) |
19:57:16 | petertodd: | gmaxwell: yup, probably a bad substitute. one of the harder parts of power engineering is that the time constants of your fuses matter and have to be matched to the equipment
20:27:31 | Emcy: | impressive |
20:27:44 | nsh_: | nsh_ is now known as nsh |
20:28:24 | Emcy: | not quite as impressive as the kid who gave himself brain damage by sleeping in the same room as 20 radeons or something
20:29:02 | petertodd: | Emcy: heat exhaustion? |
20:29:15 | Emcy: | yea |
20:29:31 | Emcy: | heatstroke |
20:29:34 | nsh: | i once gave myself brain damage by inhabiting a space with 20 radians per revolution |
20:29:37 | petertodd: | I was worried you were gonna say EMF pollution :P |
20:29:45 | gmaxwell: | Emcy: Pretty sure that was BS. |
20:30:02 | petertodd: | nsh: non-euclidian geometry kills |
20:30:04 | Emcy: | gmaxwell perhaps but its a nice bit of bitcoin folklore |
20:30:05 | nsh: | hehe |
20:30:27 | nsh: | Riemannian Manifolds: Just Say No
20:31:35 | petertodd: | nsh: I felt the brain damage coming on while trying to add ECDH support to python-bitcoinlib last night... manifolds >> openssl I'm sure |
20:31:54 | nsh: | eek. how did it go? |
20:32:17 | Emcy: | how many amps/watts can an american household push then |
20:32:26 | Emcy: | i think its surprisingly low? due to 120v |
20:32:33 | petertodd: | nsh: seems to work, needs unittests with test-cases derived from something else though |
20:32:55 | nsh: | * nsh nods |
20:33:09 | nsh: | on github? |
20:33:11 | petertodd: | Emcy: 200A service is fairly common, which is 24kW in theory |
20:33:20 | gmaxwell: | petertodd: per phase. |
20:33:23 | petertodd: | nsh: not yet, but I can if you want to |
20:33:24 | gmaxwell: | 'phase' |
20:33:36 | gmaxwell: | Emcy: 160 and 200amp breakers (at 240v) is pretty typical. so about 40-48KW. |
20:33:36 | petertodd: | gmaxwell: oh right! so double that for US-style wiring |
20:33:36 | nsh: | well, no rush on my part, but i'd like to see it whenever |
20:33:51 | petertodd: | nsh: if you can come up with a source of test cased that'd be great!
20:33:56 | petertodd: | *cases |
20:33:58 | nsh: | * nsh nods |
20:34:37 | petertodd: | Emcy: basically, you can easily spend ~$5/hour on electricity :) |
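The arithmetic behind that figure, assuming the full 200 A / 240 V service mentioned above and an illustrative $0.10/kWh rate:

\[
P = V \times I = 240\,\mathrm{V} \times 200\,\mathrm{A} = 48\,\mathrm{kW},
\qquad
48\,\mathrm{kW} \times \$0.10/\mathrm{kWh} \approx \$4.80\ \text{per hour}.
\]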
20:34:59 | Emcy: | you have a special 240v circuit? |
20:35:17 | petertodd: | Emcy: all us-style-wiring houses do actually |
20:35:30 | nsh: | people would draw ridiculous currents with massive over-the-top christmas lighting set-ups, before LEDs replaced a lot of the incandescent bulbs |
20:35:33 | gmaxwell: | Emcy: in the US our power is really 240v but wired with a center tap on the transformer so you can get 120 or 240 volts depending on how you're wired up. |
20:35:38 | petertodd: | Emcy: basically you have two 180 degree out of phase circuits referenced to ground, so 240V across the two |
20:35:53 | Emcy: | interesting |
20:36:40 | gmaxwell: | There are three lines down from the pole, Hot, Neutral, Hot, and between the two hots you have 240v. Between any hot and the neutral you have 120v.
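In equation form (nominal values): each hot leg of the centre-tapped winding is 120 V RMS to neutral but opposite in phase, so the hot-to-hot difference is the full 240 V:

\[
V_A(t)=120\sqrt{2}\,\sin(\omega t),\qquad
V_B(t)=120\sqrt{2}\,\sin(\omega t+\pi)=-V_A(t),
\]
\[
V_A(t)-V_B(t)=240\sqrt{2}\,\sin(\omega t)
\;\Rightarrow\; 240\,\mathrm{V\ RMS}\ \text{hot-to-hot},\quad 120\,\mathrm{V\ RMS}\ \text{hot-to-neutral}.
\]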
20:36:55 | petertodd: | Emcy: norway (?) and a few other countries routinely run three phase into the home actually, so that's three 120deg out of phase wires |
20:37:03 | Emcy: | we have an earth pin instead ^^ |
20:37:13 | gmaxwell: | Big appliances (electric stoves and dryers and air conditioners) are wired on 240v, the rest is usually wired up to 120v.
20:37:14 | petertodd: | Emcy: no, everyone has earth (nearly) |
20:38:00 | Emcy: | your plugs have 2 pins, so i assumed everything is double insulated
20:38:00 | petertodd: | Emcy: earth is for safety, not for current |
20:38:04 | gmaxwell: | Emcy: we have an earth pin too. (and usually the neutral is also tied to earth, but some distance away, so it's not a great earth ground on its own) |
20:38:13 | Emcy: | lots of UK stuff has a dummy earth pin |
20:38:37 | petertodd: | Emcy: US does that too, it's just an engineering decision as to whether to use the earth pin or not to meet the safety requirements
20:38:44 | gmaxwell: | (also if the neutral comes disconnected at the pole, all your appliances end up in series in between the 240v, and the neutral becomes electrified relative to ground, and bad things happen like fire. :P )
20:39:18 | petertodd: | Emcy: pretty much anything with a metal case exposed to the user will use it as that makes it easy to keep the case at zero potential, but exceptions apply in both directions |
20:39:33 | Emcy: | two phase seems complicated for domestic wiring tbh |
20:39:45 | Emcy: | over here we have 30A cooker circuits for the big stuff but thats it |
20:39:58 | petertodd: | Emcy: nah, it's really simple actually, and makes meeting safety specs easier |
20:40:01 | adam3us: | so yes well i picked malta for a reason (very scientific, spread sheet involving a dozen factors and it came out on top for my preferences) i used to live in montreal for 3yrs when i was at ZKS, its not bad; i also spent time in zurich, my mom is from there, I like it a lot |
20:40:31 | adam3us: | apropos of telecommuting locations. have been doing it in malta >5 yrs now :) |
20:40:41 | gmaxwell: | adam3us: I wasn't aware of that, but I had the impression that your decision to live there was carefully considered. |
20:40:43 | petertodd: | Emcy: see, you guys have 240V to earth, so you need 240V-rated insulation, while we get away with just 120V insulation yet get the same advantage of 240V for high power stuff |
20:40:55 | Emcy: | and ive never understood how neutral is thus called when it will happily kill you dead too |
20:41:20 | petertodd: | Emcy: thing is, as it turns out 240V insulation safety isn't that hard, so just using 240V would be ok too - but that's not changing now |
20:41:21 | Emcy: | petertodd that makes sense |
20:41:28 | petertodd: | Emcy: you guys have neutral too actually |
20:41:45 | Emcy: | yes i know, its the blue one. But its still hot |
20:42:00 | petertodd: | Emcy: yes and no. it's only hot in the sense that it *can* be hot |
20:42:19 | petertodd: | Emcy: like, if you touch neutral, 99% of the time you'll be fine, but if you touch earth, 99.999% of the time you'll be fine :) |
20:42:37 | Emcy: | i like those odds :D |
20:42:37 | gmaxwell: | petertodd: well if it's not really well bonded to ground it may often have some residual potential. |
20:43:14 | gmaxwell: | e.g. neutral in my dad's house was often 30 volts relative to a good earth ground and electronics whose cases ended up connected to neutral would arc against stuff.
20:43:15 | Emcy: | ive gotten a smack off an earth pin before. I learned about PD
20:43:17 | petertodd: | gmaxwell: exactly, and even if it is there's some voltage due to voltage drop |
20:43:26 | adam3us: | gmaxwell: from your skimming the delayed private key gen IBE seems interesting. did u get the impression that it could do one of the NIFS sub-problems of having the private keys be in some sequence so you could compute forward but not backward? any idea of the hardness assumptions, more or less conservative than weil-pairing?
20:43:27 | Emcy: | ive gotten a smack off a tv tube too :( |
20:43:38 | petertodd: | Emcy: yeah, those are dangerous... |
20:43:45 | petertodd: | Emcy: you could have easily been killed there |
20:43:58 | Emcy: | yes |
20:44:14 | Emcy: | it wasnt even plugged in, just charged |
20:44:33 | petertodd: | Emcy: the problem with electric shock is parts of your body can withstand *much* higher currents than others - like any time you even feel a shock, that's actually enough to stop your heart, but 99.9% of the time the current isn't in the right place |
20:44:54 | Emcy: | from memory its 30mA across the heart |
20:44:56 | petertodd: | Emcy: so people get complacent when nothing ever happens, when in reality nothing happened only because the current bypassed their heart |
20:44:58 | gmaxwell: | adam3us: I didn't contemplate it. I was mostly trying to figure out if I could make the data smaller. Do you see a big need for forward only? My thinking is that sending a new key for every block/day whatever isn't a big overhead... and we actually want a filtering node to stop filtering when we're not connected. |
20:45:02 | Emcy: | and skin resistance is 40v or so dry
20:45:14 | petertodd: | Emcy: more like 1mA directly applied to the heart IIRC |
20:45:22 | Emcy: | i was taught to work with one hand wherever possible :) |
20:45:33 | petertodd: | Emcy: 40V is a voltage, not a resistance :) but yeah, <48V tends to be safe pretty much wherever due to skin resistance |
20:45:53 | petertodd: | Emcy: however, something as simple as a probe cutting into your skin can lower the resistance enough to get you killed |
20:46:05 | petertodd: | Emcy: very good advice
20:46:14 | andytoshi: | petertodd: i think he means there is a breakdown voltage of ~40v. i have heard this too but i don't think it's true |
20:46:25 | Emcy: | i mean ~40v before a bad current gets going |
20:46:36 | Emcy: | but youre right, humans are not zeners lol |
20:46:43 | petertodd: | andytoshi: it's very true, well-documented cases of that |
20:47:00 | adam3us: | gmaxwell: not strongly interesting for bitcoin reusable addr i guess. fwd-secrecy i was just noticing in passing the other day could have some nominal value perhaps like if your disk got compromised, you couldnt even correlate your own old tx never mind help a full node do it :)
20:47:08 | petertodd: | andytoshi: medical power supplies are orders of magnitude better isolated because of that - even static shocks can be life threatening when your chest is opened up |
20:47:31 | Emcy: | hmm thats a point |
20:47:48 | petertodd: | Emcy: yeah, but fortunately, when your chest is opened up normally you're in the best possible place to get a heart attack :)
20:48:06 | andytoshi: | psh. i always keep my chest open for easy maintenance |
20:48:12 | petertodd: | Emcy: the real safety concern there is actually that anesthetic gases are often flammable
20:48:18 | Emcy: | petertodd hell some treatments require it lol |
20:49:52 | gmaxwell: | petertodd: did you know that conman is an honest to god anesthesiologist? I'd thought the whole putting people to sleep thing was incompatible with his temperament, but now that you mention the flammability. :P
20:50:43 | petertodd: | gmaxwell: lol! is that the same guy that's a kernel dev?
20:50:49 | adam3us: | about the non-transferable sigs (in store-and-forward comms) various permutations ian brown & I wrote some basic ideas for pgp http://www0.cs.ucl.ac.uk/staff/I.Brown/nts.htm gmaxwell explained it fine above. Ian even drew pretty pictures.
20:50:51 | gmaxwell: | yes. |
20:51:08 | petertodd: | gmaxwell: sheesh, some people just make you feel inadequate :P |
20:51:23 | Emcy: | adam3us are you adam back? |
20:51:43 | adam3us: | Emcy: yeah |
20:52:02 | Emcy: | ok |
20:52:31 | petertodd: | adam3us: heh, I've done that protocol by hand before |
20:52:31 | gmaxwell: | adam3us: I can't fathom why pgp still forces non-repudiation onto people after all this time, what it does is something basically no one wants. If you want encryption + non-repudiation what you want is a clearsigned message which is encrypted— so that you can show it to people without dealing with their inability to decrypt.
20:53:05 | adam3us: | gmaxwell: its horrendous. mostly u do NOT want non-repudiability period IMO |
20:53:27 | gmaxwell: | yea, I mean, it's useful from time to time. But you always know when you want it. |
20:53:49 | adam3us: | gmaxwell: exactly. 99% of the time its unnecessary risk |
20:54:28 | petertodd: | adam3us: I'd love to see some court cases where this has actually come up - as I said on cryptography in reality repudiation is hard to achieve anyway
20:55:11 | adam3us: | petertodd: yeah as i read it courts just make pragmatic decisions... preponderance of evidence bla blah. but OTR with no logging is good. |
20:55:40 | petertodd: | adam3us: for professional contexts, non-repudiation+encryption is what's generally needed, as well as logging, and key recovery...
20:55:53 | petertodd: | adam3us: (or at least, what they have to claim is needed!) |
20:56:29 | gmaxwell: | adam3us: wrt the link. One detail is that if Alice signs the symmetric key it proves that alice communicated using that symmetric key. If instead you have a construction where you do an Alice.Bob ECDH, with no signing, then Bob can't prove that alice ever sent a message at all... which is slightly stronger.
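A sketch of the repudiable construction gmaxwell describes: authenticate with a MAC keyed by the ECDH shared secret instead of a signature. Since Bob can compute the same tag himself, a valid tag convinces Bob (who knows he didn't write the message) but proves nothing to a third party. The ECDH output is stood in for by a placeholder value, and the key-derivation label is an assumption.

    import hashlib
    import hmac

    def mac_key(ecdh_shared_secret: bytes) -> bytes:
        # both sides derive the same MAC key, because ECDH(a, B) == ECDH(b, A)
        return hashlib.sha256(b"auth-key" + ecdh_shared_secret).digest()

    def tag(ecdh_shared_secret: bytes, message: bytes) -> bytes:
        return hmac.new(mac_key(ecdh_shared_secret), message, hashlib.sha256).digest()

    def check(ecdh_shared_secret: bytes, message: bytes, t: bytes) -> bool:
        # verification is just recomputation, so a (message, tag) transcript is
        # something either party could have fabricated: no non-repudiation
        return hmac.compare_digest(t, tag(ecdh_shared_secret, message))

    shared = hashlib.sha256(b"placeholder for the ECDH point").digest()  # hypothetical
    t = tag(shared, b"the mail body")
    assert check(shared, b"the mail body", t)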
20:56:59 | petertodd: | adam3us: see, repudiation + timestamping could be an interesting mix legally, as the timestamp will often be non-repudiation evidence, yet it's something anyone can apply
20:57:04 | gmaxwell: | petertodd: sure, the world is complicated, but the crypto should never make you _worse_ off. |
20:57:31 | gmaxwell: | and generally adding non-repudiation where you didn't think it was there and didn't want it at least theoretically makes you worse off. |
20:58:11 | petertodd: | gmaxwell: right, but that's not a consideration when a company decides whether or not they want to pay PGP-corp a bunch of money - they want to tout their better security, and in corporate environments you usually (publicly) want non-repudiation
20:58:15 | gmaxwell: | Oh and fwiw, as far as the logs on my computer are concerned, you're all underground drug dealers. I forge logs locally when I'm bored. Sorry. |
20:58:30 | petertodd: | gmaxwell: heh, I timestamp mine |
20:58:48 | petertodd: | gmaxwell: (it's a pain in the ass constantly having to make forged ones though) |
20:58:50 | Emcy: | thanks greg |
20:59:00 | adam3us: | gmaxwell: yes. that sounds better. |
20:59:46 | petertodd: | Anyway, there's room for both, so I support the new PGP "private-sign" option that I'm sure someone will implement Real Soon. :) |
21:00:34 | gmaxwell: | yea, I am certainly a fan of non-repudiation existing. Heck, I used it 24 hours ago... we use it for software releases. It's useful.. just not usually what we want for email.
21:01:26 | petertodd: | gmaxwell: and I used encrypted and signed non-repudiation for the ltc security audit, and some other still-private stuff like it
21:02:37 | gmaxwell: | I also think non-repudiation should almost always be coupled with timestamping just as a norm. Otherwise you can repudiate too easily by 'losing' control of your private key, that's harder if you have timestamps.
21:02:44 | adam3us: | gmaxwell: it seems like a ringsig in effect. saw you said ring sig above so probably you said that. colin plumb had another one involving xor and rsa keys with same effect. |
21:03:48 | petertodd: | gmaxwell: indeed - I probably have the only crypto authenticatable copy of jdillon's emails for instance, and to prove the timestamps would take some munging around in git and some privacy exposure due to how git works internally |
21:04:30 | petertodd: | Anyway, main sticking point there for me personally to implement that is there's no decent OpenPGP libraries out there, other than Bouncy Castle, and I know nothing about Java.
21:09:08 | tromp__: | have any of you read up on the Cuckoo Cycle PoW? |
21:09:13 | nsh: | (on the subject of crypto and legality: i'll be violating a (UK law) RIPA s.49 order 'requiring' disclosure of decryption keys on Friday midday under penalty of two years imprisonment, in theory.) |
21:09:18 | petertodd: | tromp__: it's not asic hard at all |
21:09:31 | petertodd: | nsh: ? |
21:09:40 | tromp__: | how wld an asic get a speedup? |
21:10:04 | petertodd: | tromp__: you're asking the wrong question - we don't care about speedups, we care about cheaper running/overall costs |
21:10:57 | nsh: | petertodd, just some... silliness -- but it will probably lead to some interesting courtroom arguments somewhere down the line, if they choose to push it |
21:11:22 | nsh: | ( https://en.wikipedia.org/wiki/Key_disclosure_law#United_Kingdom ) |
21:11:43 | petertodd: | nsh: huh, is the case public? |
21:12:16 | nsh: | the UK case isn't public as i haven't been charged, but some idiots in virginia have charged me. if you google "nsh indictment" there's a couple of pdfs on justice.gov |
21:12:47 | nsh: | (i can't talk about allegations, etc.) |
21:13:06 | tromp__: | i don't see how an asic wld run much cheaper. it would need to have GBs of memory |
21:13:12 | petertodd: | nsh: ha, good luck |
21:13:37 | petertodd: | tromp__: again, you're asking the wrong question. What drives running costs? |
21:14:03 | tromp__: | cost of RAM |
21:14:10 | nsh: | thanks :) |
21:14:11 | petertodd: | tromp__: no, power |
21:14:21 | tromp__: | no, not for a latency constrained pow |
21:14:38 | petertodd: | tromp__: cuckoo is parallelizable |
21:14:43 | nsh: | tromp__, you have to think about scaling as a function of the amount of work you want to do. eventually it's always power |
21:14:49 | nsh: | as the other costs don't scale with work |
21:14:55 | tromp__: | it's not parallelizable
21:15:37 | tromp__: | read the paper to see how it detects cycles |
21:15:50 | petertodd: | tromp__: physically speaking memory has lots of long wires all over the die, and since cuckoo is parallelizable your best implementation will shorten those wires with special-purpose "routers" that pass around incomplete cuckoo attempts between those memory cells until it finds a cycle that works
21:16:11 | petertodd: | tromp__: I have read that paper, either you make cuckoo not efficiently verifiable, or you make it parallelizable |
21:16:36 | tromp__: | it's trivially verifiable, and not parallelizable
21:16:41 | petertodd: | tromp__: thing is that architecture is totally custom, yet will reduce power because driving a short wire uses less energy than a long one
21:16:51 | petertodd: | tromp__: look, just saying that doesn't make it true |
21:17:08 | tromp__: | how would you parallelize it?
21:17:27 | petertodd: | tromp__: simple, use the same block of memory and have multiple attempts at finding a cycle go on at once |
21:18:10 | tromp__: | you cannot do that. all the memory must be used for a single attempt |
21:18:20 | petertodd: | tromp__: get a pad of grid paper out and draw a big block of memory, and think what happens on each step of the cycle, and quite literally how to physically get that information to the part of the memory for the next step in the cycle |
21:18:41 | petertodd: | tromp__: either you use all the memory for an attempt, and it's not efficiently verifiable, or you don't, and it's parallelizable
21:18:56 | tromp__: | you have not understood the paper |
21:19:09 | petertodd: | tromp__: ok, explain to me what you think happens |
21:19:54 | tromp__: | 1st of all, it's trivially verifiable because you just generate the 42 edges and check they form a cycle
21:20:05 | tromp__: | this is what my verify.c does |
21:20:17 | petertodd: | tromp__: ok, so lets get into detail: what does generating those edges mean exactly? |
21:20:40 | nsh: | * nsh (is silently auditing this conversation) |
21:20:47 | tromp__: | compute hash(header||nonce)) and extract the two endpoints from it |
21:20:54 | petertodd: | nsh: for quality I hope |
21:21:02 | petertodd: | tromp__: right, and how much data does that need? |
21:21:17 | tromp__: | header and 42 nonces |
21:21:17 | nsh: | (my edification mostly, not really qualified to assess the quality except through my own lens, darkly) |
21:22:10 | petertodd: | tromp__: right, how does the verifier know the nonces are valid? because they form a cycle right? |
21:22:37 | tromp__: | because the corresponding edges form a cycle |
21:22:45 | petertodd: | tromp__: exactly |
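A sketch of the verification step being described, shaped like tromp__'s verify.c but not a copy of it: derive each edge's two endpoints from hash(header || nonce), one endpoint in each partition, then check that the 42 proof edges trace a single cycle. blake2b stands in for the paper's siphash-based edge generation, and the graph size is illustrative.

    import hashlib

    PROOF_SIZE = 42
    HALF_SIZE = 1 << 20          # nodes per partition; illustrative

    def edge(header: bytes, nonce: int):
        # stand-in for the siphash-based endpoint derivation in the paper
        h = hashlib.blake2b(header + nonce.to_bytes(8, "big"), digest_size=16).digest()
        u = int.from_bytes(h[:8], "big") % HALF_SIZE               # partition U node
        v = HALF_SIZE + int.from_bytes(h[8:], "big") % HALF_SIZE   # partition V node
        return u, v

    def verify(header: bytes, nonces) -> bool:
        """True iff the 42 edges generated by the proof nonces form one 42-cycle."""
        if len(nonces) != PROOF_SIZE or len(set(nonces)) != PROOF_SIZE:
            return False
        adj = {}
        for n in nonces:
            u, v = edge(header, n)
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        # a single 42-cycle touches exactly 42 nodes, each of degree 2
        if len(adj) != PROOF_SIZE or any(len(nbrs) != 2 for nbrs in adj.values()):
            return False
        # walk the cycle and require it to close only after all 42 edges
        prev = None
        start = cur = next(iter(adj))
        steps = 0
        while True:
            nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
            prev, cur, steps = cur, nxt, steps + 1
            if cur == start:
                return steps == PROOF_SIZE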
21:23:25 | petertodd: | tromp__: ok, so lets look at how you find those edges: map a given edge location to an address in memory associated with a nonce, and keep searching until you find a cycle right?
21:23:45 | tromp__: | not at all |
21:23:54 | petertodd: | tromp__: ok, so explain to me |
21:23:54 | tromp__: | you maintain the directed cuckoo graph |
21:24:10 | petertodd: | tromp__: yes, and where is that graph stored? |
21:24:18 | petertodd: | tromp__: (and how?) |
21:24:38 | tromp__: | in a huge array |
21:24:50 | tacotime_: | Is this anything like hamhash? |
21:24:52 | tromp__: | 32 bits per node |
21:25:14 | petertodd: | tromp__: ok, so addr:nonce? |
21:25:20 | petertodd: | tromp__: (32-bit nonce)? |
21:25:26 | tromp__: | no; nonces are not stored |
21:25:30 | tacotime_: | http://jones.math.unibas.ch/~massierer/theses/massierer-hons.pdf |
21:25:33 | tromp__: | they're forgotten to save memory |
21:25:36 | petertodd: | tromp__: ok, so what is in that array? |
21:25:46 | tromp__: | only the directed cuckoo graph is maintained |
21:25:59 | petertodd: | tromp__: but lets get into detail, what is in that array? |
21:26:10 | tromp__: | pls read the latest write-up, it has some expanded sections from the first writeup |
21:26:25 | tromp__: | ok the array has N0+N1 slots |
21:26:49 | tromp__: | cuckoo[i] points to the alternate slot that a key could occupy |
21:26:50 | petertodd: | tromp__: the writeup at eprint.iacr.org/2014/059.pdf? |
21:26:53 | tromp__: | yes |
21:27:00 | petertodd: | tromp__: right, that's the one I read |
21:27:13 | tacotime_: | Thanks. |
21:27:40 | tromp__: | if a nonce generates edge (i,j), then you have to end up setting either cuckoo[i] = j, or cuckoo[j] = i |
21:27:51 | petertodd: | tromp__: yeah |
21:28:10 | tromp__: | but the algo also checks if this edge is forming a cycle |
21:28:14 | petertodd: | tromp__: so my point is, you keep modifying that array until the path through it forms a cycle |
21:28:24 | petertodd: | tromp__: or, you keep guessing new nonces until it does |
21:29:19 | tromp__: | but when you dont form a cycle, you still need to reverse a path from either i or j to the endpoint
21:29:38 | petertodd: | what do you mean by "reverse a path"? |
21:29:55 | tromp__: | reverse the direction of each edge on the path |
21:30:24 | tromp__: | which corresponds to each key displacing the next n cuckoo hashing |
21:30:30 | tromp__: | n->in |
21:30:31 | petertodd: | tromp__: how does the PoW verifier know that I did that? IE why does reversing it matter? |
21:30:48 | tromp__: | the verifier doesnt care HOW you found the cycle |
21:31:07 | tromp__: | it just happens that cuckoo hashing is the seemingly most efficient way to find cycles |
21:31:07 | petertodd: | tromp__: exactly, so I assume this reversing thing must have a performance advantage |
21:31:18 | andytoshi: | andytoshi has left #bitcoin-wizards |
21:31:39 | tromp__: | you need to reverse in order to be able to store the edge (i,j) |
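A sketch of the insertion and path-reversal step being described, following the shape of the algorithm in the paper rather than its code: the cuckoo array stores at most one outgoing directed edge per node; a new edge (i, j) first follows both endpoints to their tree roots, a shared root means the edge closes a cycle, and otherwise the shorter path is reversed so the edge can be stored. Hashing, array sizing, and recovering the cycle's nonces (a second pass, since nonces are not stored) are elided.

    def path_to_root(cuckoo: dict, node: int) -> list:
        """Follow stored directed edges until a node with no outgoing edge (a root)."""
        path = [node]
        while cuckoo.get(path[-1]) is not None:
            path.append(cuckoo[path[-1]])
        return path

    def cycle_length(pi: list, pj: list) -> int:
        # the two paths share a suffix ending at the common root; the cycle is
        # the two disjoint prefixes plus the new edge itself
        k = 0
        while k < len(pi) and k < len(pj) and pi[-1 - k] == pj[-1 - k]:
            k += 1
        return (len(pi) - k) + (len(pj) - k) + 1

    def add_edge(cuckoo: dict, i: int, j: int, target: int = 42) -> str:
        pi, pj = path_to_root(cuckoo, i), path_to_root(cuckoo, j)
        if pi[-1] == pj[-1]:                 # same root: this edge closes a cycle
            return "solution" if cycle_length(pi, pj) == target else "cycle ignored"
        # different roots: reverse the shorter path, then record the new edge
        if len(pi) <= len(pj):
            for a, b in zip(pi, pi[1:]):     # each node now points back toward i
                cuckoo[b] = a
            cuckoo[i] = j
        else:
            for a, b in zip(pj, pj[1:]):
                cuckoo[b] = a
            cuckoo[j] = i
        return "stored"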
21:31:52 | petertodd: | now, back to my main point: why can't I parallelize that? I have an n-port memory block, so I just have n different cuckoo cycle-finding attempts running in parallel
21:32:14 | tromp__: | because prior to insertion both cuckoo[i] and cuckoo[j] may alrd point elsewhere
21:32:39 | tromp__: | because the paths from one attempt will totally screw up the paths from the other
21:32:48 | petertodd: | so what? sometimes these attempts will collide, but that's just a probability thing, we can discard those failed attempts |
21:33:03 | petertodd: | I'm still getting parallelism |
21:33:37 | tromp__: | no you'll almost never be able to follow a long path of edges all from one attempt |
21:33:51 | petertodd: | tromp__: how long is long? |
21:34:04 | tromp__: | to find a 42 cycle, you'll need to follow for instance paths of length 21 from each of i and j |
21:34:24 | tromp__: | and all these 41 edges you follow MUST be from the same attempt |
21:34:40 | petertodd: | (btw, the magic word here is birthday) |
21:34:43 | tromp__: | so your odds of running even 2 instances in parallel are about 2^-41 |
21:34:46 | tromp__: | good luck with that |
21:35:17 | petertodd: | ah, but are you sure I can't be more clever than that? |
21:35:22 | tromp__: | my paper analyses a more sensible case of trying to reduce memory |
21:35:53 | tromp__: | i cannot prove it, but i'm pretty sure |
21:36:19 | tromp__: | i'll bet money on it |
21:36:29 | petertodd: | like, suppose handle collisions by quickly grabbing an adjacent memory cell to temporarily store the extra data?
21:36:40 | petertodd: | that's the kind of thing a custom ASIC could be engineered to do cheaply |
21:36:47 | petertodd: | *suppose I |
21:37:06 | tromp__: | then you're essentially creating a bucket instead of a single slot |
21:37:15 | petertodd: | tromp__: sure, but I can do that really cheaply! |
21:37:42 | tromp__: | no, adjacent slots will mostly be in use |
21:37:52 | petertodd: | tromp__: why? |
21:38:32 | tromp__: | because you'll be at a load of close to 50% before you find cycles
21:38:39 | petertodd: | for instance, with my grid of small memory bank architecture I can easily have the circuits for each small bank handle that deconfliction |
21:38:44 | tromp__: | so almost half of all slots are filled |
21:39:31 | petertodd: | tromp__: right, but remember all that matters is we find a short cycle |
21:39:53 | tromp__: | plus the administrative overhead of keeping track of which slots store an i edge or an i-1/i+1 edge will kill you
21:40:04 | petertodd: | in software it'd kill you, in hardware it won't |
21:40:06 | tromp__: | yes, if you call 42 short |
21:40:44 | petertodd: | 42 is short compared to hundreds of mb |
21:41:04 | tromp__: | basically, if you try to use shortcuts for edges that work 90% of the time, then you'll still be only 0.9^42 effective
21:41:10 | tromp__: | which is negligibly small
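The two figures quoted in this exchange are easy to sanity-check (2^-41 for keeping all 41 path edges within one of two parallel attempts, and 0.9^42 for a per-edge shortcut that works 90% of the time):

```python
# quick checks of the numbers quoted above
print(0.5 ** 41)   # ~4.5e-13: all 41 edges land in the same one of 2 parallel attempts
print(0.9 ** 42)   # ~0.012: effectiveness if each edge shortcut only works 90% of the time
```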
21:42:11 | tromp__: | cuckoo makes you use most of N * 32 bits for a single attempt |
21:42:20 | petertodd: | you're still not getting it... let me try another argument |
21:42:29 | petertodd: | so remember what I was saying about how memory works? |
21:42:52 | petertodd: | even in the *single* attempt case, a routed memory architecture uses a lot less power than a standard one |
21:42:55 | tromp__: | let me ask a qst first |
21:43:02 | petertodd: | qst? |
21:43:26 | tromp__: | if you think you can run multiple instances within memory, are you claiming that you can run cuckoo with half the designed memory? |
21:43:38 | petertodd: | tromp__: no, I'm claiming I can run it in less power |
21:44:03 | tromp__: | power is already pretty small since most time is spent waiting for memory latency
21:44:22 | petertodd: | if you think power is what matters then you don't understand the economics of PoW... |
21:44:46 | tromp__: | you assume that PoW must be dominated by cpu bound computation |
21:44:55 | petertodd: | you're always in the situation where if you use the equipment for more than a few months power costs more than the equipment |
21:45:21 | tromp__: | that's why cuckoo is different. |
21:45:33 | tromp__: | you'll be spending way more on RAM prices than on power |
21:46:11 | petertodd: | if you want me to believe that, then get a hardware designer to analyse your design, you haven't done that |
21:48:02 | tromp__: | i just want you to believe that you cannot feasibly run cuckoo within half the designated memory, even if you add lots of non-memory asics |
21:48:24 | petertodd: | tromp__: which I'm not claiming - asics can be memory optimized too you know |
21:48:49 | petertodd: | an interesting construction technique for that is to take a memory die and overlay it with a non-memory die actually - extremely low latency, and totally custom
21:49:43 | tromp__: | since cuckoo really randomly accesses the random-access memory, it will be hard to optimize memory layout
21:49:45 | petertodd: | could be a good way to do the routed memory option actually, and then use power-gating to turn off whatever part of the dies isn't being used for computation, as well as put the dram's into lower power modes |
21:50:08 | petertodd: | you don't have to optimize layout, you optimize the wiring that gets the signals to and from the memory cells |
21:50:28 | petertodd: | like I said, you burn a lot of power getting the data from the dram cell to the processor and back - shorten those wires and the whole thing uses a lot less power
21:50:51 | petertodd: | how do you shorten them? crazy custom asics, and die-on-die is a pretty solid way to do that |
21:51:14 | petertodd: | you also get lower latency by shortening them, and you *did* say cuckoo is latency hard... |
21:51:28 | tromp__: | any such optimization would benefit existing ram chips as well. we can assume that samsung already optimized their memory chips pretty well
21:52:08 | petertodd: | no they won't, dram is constrained by the fact that it has to be general purpose, I'm saying you can optimize for latency by placing an asic with the computational part of the circuit - not much - directly on top of the memory die
21:52:41 | petertodd: | remember that L1 and L2 cache is basically that same strategy, but with tradeoffs due to all the computational circuits needed in a modern processor |
21:52:57 | tromp__: | the computational part of cuckoo is really small. just one hash per edge |
21:53:07 | petertodd: | exactly! that's a huge problem |
21:53:21 | tromp__: | whereas you need to do 3.3 memory reads and 1.75 memory writes per edge on avg |
21:53:33 | tromp__: | so it's really dominated by latency |
21:53:42 | petertodd: | so my custom asic die can be those tiny little hashing units scattered all over the place, and my custom memory die can have a lot of read/write ports so that the wires to the closest hashing unit are short, thus reducing the latency |
21:53:50 | tromp__: | putting hash circuits on your memory die doesn't help much
21:54:07 | petertodd: | once you find your hash, then the wires to the *next* memory cell/hashing unit can also be short |
21:54:20 | petertodd: | tromp__: if you think that doesn't help much, you don't think L1/L2 cache helps either |
21:54:32 | tromp__: | all the memory accesses still need to be coordinated to properly follow the paths
21:54:42 | tromp__: | and reverse parts |
21:54:52 | petertodd: | so? that can be done locally with custom routing circuitry dedicated to that task |
21:54:59 | tromp__: | for cuckoo, L1/L2 cache will be quite useless |
21:55:43 | petertodd: | yes, only because it's so small, I'm telling you how to make essentially a custom GPU dedicated to hashing with distributed memory to keep latencies down |
21:56:11 | tromp__: | your hashers will be idle 99.999% of the time |
21:56:22 | petertodd: | and that's a good thing! when they're idle they use no power |
21:56:47 | petertodd: | in fact you'd probably do best with a really custom async-logic implementation of this so you don't have to route clock signals a long distance |
21:56:51 | tromp__: | and have no benefit over a single hasher doing all the hashing work |
21:57:06 | petertodd: | yes you do, getting the data to and from that hashing uses a lot of power |
21:57:23 | tromp__: | you cannot avoid the latency induced by having to coordinate values read from random memory locations |
21:57:48 | tromp__: | no matter what wiring, the distance between 2 random memory locations is still large |
21:57:52 | petertodd: | yes I do, my hashing circuitry and memory routing circuitry is physically located closer to the cells than before, so the speed-of-light delay is short
21:58:13 | petertodd: | nope, I can do it far more efficiently if the computation and routing happen on the same die and/or module
21:58:43 | petertodd: | remember, the reason why main memory accesses are so slow is because of the speed of light - I'm proposing a design that shortens all those distances drastically
21:59:26 | tromp__: | you're not shortening the distance from random location cuckoo[i] to random location cuckoo[j]
21:59:53 | tromp__: | and the algorithm's actions depend on both those values
22:00:03 | petertodd: | yes I am! the distance in commodity hardware is about 10cm, I'm shortening it to about 1cm
22:00:21 | petertodd: | even less if I use crazy 3d packaging... which I can because this is low power! |
22:00:47 | petertodd: | like, I should actually sandwich at least three dies, hashing in the middle and memory on either side |
22:01:31 | petertodd: | (you may not know this, but direct die-to-die connections are possible these days with techniques like microdots of conductive glue)
22:01:39 | tromp__: | if 3d memory becomes feasible you'll see it on commodity hardware first
22:02:36 | petertodd: | hint: you already do, it gets used for cache and even main memory (in system-on-a-chip designs) |
22:02:47 | petertodd: | problem is those designs aren't optimized for latency |
22:03:22 | petertodd: | instead they *tradeoff* area for latency, and then make it back up by taking advantage of locality with caching |
22:03:43 | phantomcircuit: | petertodd, for scrypt? |
22:03:45 | petertodd: | which means I can create a custom design by optimizing for latency at the expense of some area cost |
22:03:55 | petertodd: | phantomcircuit: we're talking about cuckoo cycle pow |
22:04:07 | petertodd: | phantomcircuit: it's supposed to be asic hard, but it's actually the exact opposite |
22:04:44 | petertodd: | tromp__: anyway, how much hardware design have you actually done? like, any at all? have you even taken a simple digital logic course and played around with some FPGAs? |
22:05:26 | tromp__: | yes i did digital logic as part of my cs curriculum |
22:05:36 | tromp__: | but never played with FPGAs |
22:05:45 | petertodd: | tromp__: yeah, digital logic, but did it talk about implementation level issues? |
22:06:29 | petertodd: | tromp__: I'd highly suggest learning about FPGAs at least before you try to design any more PoW algorithms - at least FPGAs let you see how your logic is physically synthesized |
22:06:46 | phantomcircuit: | petertodd, this seems like it would at least be better than scrypt as a memory hard function |
22:07:17 | tromp__: | scrypt isn't technically a proof of work |
22:07:26 | tromp__: | since it doesn't have trivial verification
22:07:29 | phantomcircuit: | main memory access with DDR3 is ~300 ns |
22:07:35 | petertodd: | phantomcircuit: maybe, but the question is: is memory hard actually what you want? gmaxwell's been pointing out that it's power that matters generally for running costs
22:08:05 | grazs: | hmm, interesting |
22:08:29 | petertodd: | grazs: quite likely scrypt is actually *worse* for password hardening because it doesn't use as much power as other alternatives |
22:09:17 | grazs: | petertodd: my brain is stuck, I will meditate on this, had kind of an aha-moment though |
22:10:06 | phantomcircuit: | petertodd, if you can shift the costs from marginal to capital that is preferable as it reduces the incentive to be dishonest |
22:10:29 | petertodd: | phantomcircuit: only for non-commodity hardware |
22:10:30 | phantomcircuit: | if you've invested 10m into hardware which wont pay for itself for 10 years you're not going to be dishonest at year 1 |
22:10:49 | petertodd: | phantomcircuit: for asic-soft algorithms that's a solved problem :) |
22:11:59 | phantomcircuit: | petertodd, well yes and no |
22:12:14 | petertodd: | tromp__: anyway, I gotta go - learn some more about digital logic and electronics - you need to be at the point where you can draw a reasonable design at the physical layout level, that is how the transistors are located and what wires connect what, if you want to be able to understand this stuff sufficiently |
22:12:15 | phantomcircuit: | petertodd, as it stands today the capital cost of asics is significant |
22:12:20 | phantomcircuit: | buttt |
22:12:55 | phantomcircuit: | that's going to change |
22:13:10 | phantomcircuit: | power costs are already significant but not the most significant |
22:18:01 | tromp__: | if anyone else has feedback on Cuckoo Cycle, i'd love to hear about it |
22:19:38 | tromp__: | it can't get much worse than being told it's the exact opposite of asic-hard :) |
22:21:14 | azariah4: | would the proposed ethereum contracts make sense if a contract is run on each node receiving a tx? |
22:21:18 | nsh: | additionally, it causes terminal cancer in puppies and war orphans |
22:21:24 | nsh: | :) |
22:21:44 | azariah4: | it seems they would need some way to only run once, or at least on a limited number of nodes, with e.g. SNARK so other nodes can verify instead of actually running the script
22:22:36 | azariah4: | especially given the fee per op/storage scheme |
22:27:10 | tromp__: | i've seen mention of SNARK proof size being very manageable at 288 bytes, but what's not clear to me is how much time the verification takes and whether that's practical |
22:28:02 | tromp__: | AFAIK ethereum is vague on how the processing fees for running scripts are actually distributed and to whom |
22:28:55 | tacotime_: | SNARK verification at 288 bytes is trivial |
22:29:04 | tacotime_: | But the parameter file size is not iirc |
22:30:00 | tacotime_: | For the zerocash implementation, the parameters file for their functions was over a gigabyte. |
22:30:47 | nsh: | closer to 2Gb iirc |
22:32:12 | nsh: | (i still can't intuit what this public parameters file _is_ -- how it's used as a resource...) |
22:32:19 | azariah4: | I suppose the fee scheme for contracts in ethereum could be made so that fees for a script can only be collected by the miner who mined the block containing the tx triggering the contract |
22:32:44 | azariah4: | that would make it unlikely (but not impossible of course) for other nodes to run the script |
22:34:42 | tacotime_: | nsh: gmaxwell probably knows more about what the parameters files do exactly, I still don't totally understand SCIPs. My understanding (which could be totally incorrect) is that for any given program you need to generate these parameters and disseminate them with the code you wish to have executed and verified. Then they are used (how?) when you issue arbitrary inputs to the code to
22:34:42 | tacotime_: | generate proofs that verify your given output. |
22:35:28 | tacotime_: | And that the parameters file must arise from a trusted source. |
22:35:59 | nsh: | ack to all of that |
22:36:22 | nsh: | but in terms of the proving and verifying algorithms: what use they make of the pubparam data |
22:36:36 | nsh: | i should just read the papers harder :) |
22:37:12 | tacotime_: | I'd love to do that if I didn't have all these other things to do for my grad studies in another field. :P If you figure it out, ELI5 it to me |
22:37:47 | tromp__: | so the parameter file is like a proof template that requires further specification of 7 "points" that get encoded in 288 bytes
22:40:16 | nsh: | okay, but what does template mean in terms of a mathematical process?
22:44:11 | tromp__: | i imagine it's like these steps http://en.wikipedia.org/wiki/Elliptic_Curve_DSA#Signature_verification_algorithm in the case of an ECDSA "contract" where (r,s) are the additional points
22:44:44 | tromp__: | those steps are a lot shorter than 1Gb though |
22:44:55 | nsh: | andytoshi can explain! |
22:45:33 | nsh: | in zk-SNARKs, andytoshi: what is it, algorithmically, about the public parameters that is used in the proving and verifying processes?
22:45:34 | andytoshi: | hi nsh, my logs only update every 12 minutes so i don't have any context |
22:46:01 | nsh: | i've been trying to get a handle on what is special-and-super-handy about the big public parameters in zk-SNARK systems |
22:46:19 | andytoshi: | one sec, i have the snark paper right in front of me.. |
22:46:20 | nsh: | so far i have a sense that it's some kind of common 'landscape' |
22:47:30 | nsh: | and the proof delineates a set of points that allow traversal of the landscape, with traversal being tantamount to verification of the computation's integrity |
22:48:01 | nsh: | but that's a long way from grokking (and probably wrong, anyway)
22:48:30 | andytoshi: | well, it's similar. the first step in the snark proof is to translate from ordinary C into an arithmetic circuit |
22:48:57 | andytoshi: | an arithmetic circuit is a directed acyclic graph where each node is labelled by a semiring operation (addition or multiplication) |
22:49:19 | andytoshi: | so you can construct polynomials in terms of that, and it turns out you can translate any bounded running-time program into such a circuit |
22:49:37 | andytoshi: | so the "landscape traversal" is just following the dag |
22:50:13 | andytoshi: | but there is some more complication because of the memory. circuits do not really encompass reading/writing to memory so there is additional work to do to verify that every read matches an earlier write.. |
22:50:19 | nsh: | right |
22:50:35 | andytoshi: | but in some sense that is incidental, the conceptual miracle happens even without memory |
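A toy rendering of the circuit picture described above, purely illustrative: a DAG whose internal nodes are additions or multiplications, evaluated on concrete inputs. Nothing here is SNARK-specific; the names and structure are made up for the example.

```python
# Toy arithmetic circuit: a DAG whose internal nodes are '+' or '*' gates.
# Illustrative only; this shows the circuit representation, not any SNARK machinery.
from dataclasses import dataclass
from typing import Union

@dataclass
class Gate:
    op: str          # '+' or '*'
    left: "Node"
    right: "Node"

Node = Union[str, int, Gate]   # a node is an input name, a constant, or a gate

def evaluate(node: Node, inputs: dict) -> int:
    """Evaluate the circuit bottom-up on concrete input values."""
    if isinstance(node, str):
        return inputs[node]
    if isinstance(node, int):
        return node
    l, r = evaluate(node.left, inputs), evaluate(node.right, inputs)
    return l + r if node.op == '+' else l * r

# the polynomial (x + y) * x written as a two-gate circuit
circuit = Gate('*', Gate('+', 'x', 'y'), 'x')
assert evaluate(circuit, {'x': 3, 'y': 2}) == 15
```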
22:50:55 | nsh: | so what is contained in the 1.7Gb pubparam file? and why is it all needed?
22:51:03 | tacotime_: | Is certainty in the case of SCIPs probabilistic for some proof of execution? |
22:51:31 | andytoshi: | tacotime_: yeah. but according to the Bayesians all proofs are probabilistic anyway so this is no problem :)
22:51:41 | tacotime_: | Heh. |
22:52:18 | andytoshi: | nsh: sorry, i'm flipping through the snark paper to look at how they compute the execution trace to see if there is some 'simple' idea which gives the compression |
22:53:12 | andytoshi: | gmaxwell might know this better than i, it deals heavily in linear pcps which i had never heard of before this paper. so that's some background reading i have to do.. |
22:56:21 | andytoshi: | Section 3 Verifying Circuit Sat via Linear PCPs is the relevant part of the ben-sasson paper @ http://eprint.iacr.org/2013/507 it has a 'high level' overview but i haven't read it well enough to summarize what's going on |
22:58:24 | azariah4: | this paper has some nice gems, hehe |
22:58:50 | azariah4: | "Concrete implementations are upper-bounded by computer memory size (and ultimately, the computational capacity of the universe), and thus their asymptotic behavior is ill-defined." |
22:58:54 | azariah4: | :D |
23:04:00 | HobGoblin: | HobGoblin is now known as Guest99860 |
23:05:29 | nsh_: | nsh_ is now known as nsh |
23:05:45 | nsh: | (dropped out for a moment there; local network troubleshooting for a stupid blue-ray player) |
23:06:07 | andytoshi: | what is the last thing you heard? |
23:06:29 | nsh: | -- |
23:06:29 | nsh: | nsh: sorry, i'm flipping through the snark paper to look at how they compute the execution trace to see if there is some 'simple' idea which gives the compression |
23:06:30 | nsh: | k |
23:06:30 | nsh: | [..] |
23:06:31 | azariah4: | andytoshi: they mention memory consistency though |
23:06:33 | nsh: | Section 3 Verifying Circuit Sat via Linear PCPs is the relevant part of the ben-sasson paper @ http://eprint.iacr.org/2013/507 it has a 'high level' overview but i haven't read it well enough to summarize what's going on |
23:06:35 | nsh: | -- |
23:06:43 | azariah4: | in 2.3.2 |
23:06:51 | nsh: | (missed whatever was in the ellipsis)
23:07:10 | andytoshi: | nsh: ok, that's the last thing i said. azariah4: yeah, of course, they solved that problem. but it's not relevant to conceptual questions about snarks |
23:07:57 | andytoshi: | nsh: also i said |
23:08:04 | andytoshi: | gmaxwell might know this better than i, it deals heavily in linear pcps which i had never heard of before this paper. so that's some background reading i have to do.. |
23:11:50 | nsh: | * nsh nods |
23:11:56 | nsh: | thanks in any case |
23:15:20 | andytoshi: | note that the idea about just wrapping a hard-to-verify PoW in a snark encourages centralization because the snarking step is hard to do but only has to be done once per block. so the more hashing power you have the smaller the percentage of power is "wasted" just proving that you did what you claimed. plus you can start building on that PoW before the proof is complete, but others don't get to see |
23:15:22 | andytoshi: | what to build on until you publish the proof |
23:16:31 | maaku: | andytoshi: not to mention incentives |
23:16:43 | maaku: | having a snark step delays annoucement as you have to build the snark proof |
23:17:00 | andytoshi: | maaku: yeah, i had several false starts trying to describe the incentive situation :P it's really confused |
23:19:32 | andytoshi: | the snarkchain model gmaxwell suggested is requiring SHA256(SNARK_PROVE(SHA256(utxo updates + nonce))) < TARGET, which avoids all these problems while also incentivizing snark optimization work
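A hedged pseudocode rendering of that condition; snark_prove() is only a stand-in for a real prover, and the byte encodings are arbitrary assumptions rather than a concrete protocol.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def snark_prove(statement: bytes) -> bytes:
    # stand-in for an actual SNARK prover over the inner hash
    raise NotImplementedError

def meets_target(utxo_updates: bytes, nonce: bytes, target: int) -> bool:
    inner = sha256(utxo_updates + nonce)   # cheap inner commitment
    proof = snark_prove(inner)             # expensive proving step is part of every attempt
    return int.from_bytes(sha256(proof), "big") < target
```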
23:21:50 | gmaxwell: | what's this about linear pcps? The general problem with using PCP constructions directly is that they have insane expansion of the proof, so like the proof ends up being larger than the universe, which is generally regarded as a bad thing. If the proof is a linear function, however, like one structured as a hadamard code there is a way to effectively work with the proof in a transformed domain that makes operations compact. So you ...
23:21:56 | gmaxwell: | ... don't actually have to instantiate the whole proof. |
23:23:08 | gmaxwell: | 14:35 < tacotime_> And that the parameters file must arise from a trusted source. |
23:23:48 | gmaxwell: | ^ not quite— That's how the GGPR'12 pairing-crypto SNARK stuff works. But it's not inherent to verifiable execution.
23:24:17 | gmaxwell: | The GGPR stuff has an advantage of being the most developed and currently most efficient approach. |
23:25:34 | tromp__: | gmaxwell, you missed my discussion with petertodd on Cuckoo Cycle. i was wondering if you had read the paper and had any feedback on it? |
23:25:38 | gmaxwell: | A not really accurate way to understand it is that it reduces the problem of verifying execution to testing the roots of some polynomials and testing some ratios of polynomials. ... then it instantiates a kind of homomorphic cryptosystem so you can do all this in an encrypted domain.
23:25:53 | gmaxwell: | tromp__: I saw the discussion but I didn't participate because I haven't read the paper. |
23:26:34 | tromp__: | ic, gmaxwell. anyway, i hope you have a chance to read it. i'd like to have your opinion on it |
23:27:40 | gmaxwell: | tromp__: I think petertodd's concerns in the first half of the discussion were taking the wrong approach. I understand— without reading the paper— that the approach sounded like it's based on finding a kind of structured multicollision?
23:28:08 | tromp__: | yes, a combined 42-way collision if you like
23:28:44 | gmaxwell: | Generally collision-finding PoWs give you asymmetric memory hardness but they have time/memory tradeoffs (e.g. using rho cycle finding). And generally multicollisions have more tradeoff available, not less, so I'm interested in how you solve that but I should read the paper.
23:28:49 | tromp__: | the key insight i think is that the edges must be processed in sequential order
23:29:11 | tromp__: | it's not a collision of many to one
23:29:34 | tromp__: | it really requires following long chains of pointers |
23:30:02 | gmaxwell: | The latter half of PT's discussion is a more meta point which is some new thinking. I now believe (and have been talking some with Colin Percival about) that the security analysis in the scrypt paper was significantly flawed. :(
23:30:04 | tromp__: | which is what prevents those rainbow table/bloom filter collision shortcuts
23:31:19 | gmaxwell: | Basically if you model a typical big computing cracking effort, for example, over the whole task of the computation, power costs can come out to something like 95% of the total cost (e.g. on 28nm) |
23:32:28 | tromp__: | cuckoo does about 5x more random memory accesses than hashing ops, so it should do well on power |
23:32:33 | gmaxwell: | So what can happen when you try to make a memory hard KDF is that you increase the silicon costs (part of the 5%) by— say 10 fold or what have you— but if in doing so the power costs to the attacker (for a users tolerance budget) goes down.. that may be a loss. |
23:32:45 | tromp__: | the latency will slow down the rate at which you can hash |
23:33:19 | gmaxwell: | yes, and I'm concerned thats actually bad. |
23:33:37 | tromp__: | in what way is a latency dominated pow bad? |
23:33:50 | gmaxwell: | e.g. you make the 5% 10x (say) more expensive but you make the 95% 1/4th as expensive then the result is a net loss. |
23:34:23 | gmaxwell: | tromp__: shifting cost to silicon over power potentially favors optimized hardware infrastructure. |
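In numbers, using a purely illustrative 5/95 split of the attacker's total cost:

```python
# baseline attack cost split: 5% silicon, 95% power (total normalized to 100)
silicon, power = 5.0, 95.0
new_total = silicon * 10 + power / 4   # silicon made 10x pricier, power cut to a quarter
print(new_total)                       # 73.75: the attacker's total cost went down, a net loss
```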
23:34:50 | tromp__: | but the power use will be limited by the relatively huge cost of dram |
23:36:20 | tromp__: | imagine how much memory is needed for its power-use to equal that of all sha256 asics in use now |
23:36:54 | tromp__: | it would probably be more than all memory in existence
23:37:49 | tromp__: | also, most power use in memory is due to high bandwidth ops |
23:38:24 | tromp__: | if you know you only need to fetch 32bit words, and don't fill cache lines with adjacent words, then power could drop a lot
23:38:26 | gmaxwell: | tromp__: Well we have an existence proof— TCO-wise the gridseed scrypt asics are a bigger improvement over GPUs than sha256 was. I _believe_ that increasing the memory size would actually make that worse, though I'm trying to talk to gridseed engineers about it but Chinese/English language barriers are fun. :P
23:39:03 | gmaxwell: | tromp__: I don't think you are following my argument there. I'm not quite sure how to state it more clearly. |
23:39:42 | gmaxwell: | I don't actually know how it pans out for different parameters, it's also pretty process sensitive, the last few process nodes scaled transistor density better than they scaled dynamic power. |
23:39:47 | tromp__: | i think scrypt has a LOT more parallelism in it than cuckoo
23:40:37 | andytoshi: | tromp__: an attacker can amortize his hardware costs because he is generating shitloads of keys, and he benefits from lower power. an honest user of a KDF is hit much harder by latency costs and doesn't care about power because honest users don't generate many keys |
23:40:44 | tromp__: | are any scrypt asics in the hands of miners yet? |
23:41:00 | gmaxwell: | I have one sitting in front of me, they aren't widely available to the public yet. |
23:42:03 | tromp__: | the crucial question is, how many scrypt attempts does the chip run in parallel? |
23:42:18 | maaku: | gmaxwell: is it an asic, or an fpga prototype board |
23:42:22 | gmaxwell: | tromp__: but in this case the lack of parallelism helps the attacker. Thats why I was saying that more memory appears to actually make scrypt worse (for actual attack cost) relative to commodity hardware. Though there may be inflection points in the tradeoff. |
23:42:26 | gmaxwell: | maaku: an asic. |
23:43:11 | tromp__: | how much memory is on the scrypt asic? |
23:45:28 | gmaxwell: | tromp__: not sure, still trying to extract data from the people who made it. Each instance of scrypt needs 128k, unless you use a minor TMTO but I'm pretty sure they aren't. |
23:46:42 | tromp__: | right; so they'll be able to run 8192 instances with 1GB of on chip mem |
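That 8192 figure is just the memory arithmetic, assuming the full 128 KiB of state per scrypt instance and no time-memory tradeoff:

```python
# 128 KiB per scrypt instance, 1 GiB of on-chip memory
print((1 << 30) // (128 * 1024))   # 8192 concurrent instances
```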
23:47:11 | tromp__: | now with cuckoo, you can set the memory requirement at 1GB, or 4GB. |
23:47:29 | gmaxwell: | It's in a super cheap QFN package, whole chip costs about $1.25 to make, they've been putting 5 of them to a proto board, which (including regulator losses) draws a bit less than 8 watts, and does 300KH/s which compares not too unfavorably to a year old / middle tier GPU. |
23:47:33 | tromp__: | and they won't be able to run more than a few instances |
23:47:40 | gmaxwell: | that's irrelevant sadly.
23:48:00 | tromp__: | furthermore, i don't see how each instance can run much faster than with a cpu hooked up to std RAM
23:48:10 | gmaxwell: | tromp__: did you see andytoshi's illustration of the concern? |
23:48:31 | tromp__: | no, gmaxwell, where can i see it? |
23:48:48 | gmaxwell: | tromp__: oh you can get incredible speedups if you can avoid chip external (pin-count and frequency limited) long busses. |
23:49:02 | gmaxwell: | just the point above: |
23:49:02 | gmaxwell: | 15:40 < andytoshi> tromp__: an attacker can amortize his hardware costs because he is generating shitloads of keys, and he benefits from lower power. an honest user of a KDF is hit much harder by latency costs and doesn't care about power because honest users don't generate many keys |
23:49:50 | gmaxwell: | Basically these analyses must consider both the operating costs and the upfront costs. The hardware cost is amortized.
23:50:39 | gmaxwell: | unfortunately a total cost model is much harder to do because it's much more dependent on the physical instantiation than just trying to count transistors.
23:50:41 | tromp__: | but amortization requires parallelization
23:51:27 | tromp__: | no-one has proposed a viable way of parallelizing cuckoo?!
23:52:02 | gmaxwell: | tromp__: Everything can be parallelized. E.g. the attacker acts as two miners. Within the algorithm you are not parallel, sure, but there is a maximum scope to this or you lose progress freeness, which is essential for consensus-POW. (maybe it doesn't matter for a KDF)
23:52:03 | andytoshi: | no, amortization just requires you to run for a long time. |
23:52:31 | gmaxwell: | and yes, as andytoshi points out, just continuing to run for a long time is where the amortization comes from.
23:53:05 | gmaxwell: | tromp__: I'm not sure what background you have in POW-consensus, do you understand what I mean about progress free being a requirement? |
23:53:14 | tromp__: | andytoshi, you can only run cuckoo for EASYNESS many nonces; there are only a small number of cycles to be found in that time
23:53:31 | gmaxwell: | tromp__: you don't just run it once and throw your hardware out, of course. |
23:54:07 | tromp__: | right, you need to use your 1GB of memory for, say, 10secs, and have some small prob of finding a 42 cycle |
23:54:27 | tromp__: | and keep repeating that |
23:54:58 | gmaxwell: | right, and you can also have 100 gb of memory which you run 100 instances in parallel, and then you do this over and over again, problem after problem, amortizing the hardware costs and shifting the costs towards operating costs.
23:55:10 | tromp__: | this imposes a large cost if you want to run 1000s of attempts in 10min, because you need to have many GB now
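Rough arithmetic behind that point, using the 10 s per 1 GB attempt figure from above; the 3000-attempt target is an arbitrary example, not a number from the discussion:

```python
# how much memory fits thousands of 10-second, 1 GB attempts into a 10-minute window?
attempt_seconds, window_seconds, gb_per_attempt = 10, 600, 1
attempts_wanted = 3000                                      # arbitrary example target
attempts_per_instance = window_seconds // attempt_seconds   # 60 sequential attempts each
instances = -(-attempts_wanted // attempts_per_instance)    # ceiling division -> 50
print(instances * gb_per_attempt)                           # 50 GB running concurrently
```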
23:55:55 | tromp__: | ok, now consider the installed base of commodity hardware
23:56:08 | gmaxwell: | sure but it's linearish (actually better since manufacturing scales) up to the point at which you start exhausting the earth's resources. :P In any case, I'm not saying this tradeoff loses, but that you cannot compare it soundly without a model for the total cost, not just the upfront costs.
23:56:31 | tromp__: | there may be 100M PC's that can run cuckoo |
23:56:54 | gmaxwell: | tromp__: right and that installed base gives the defenders an advantage, but that advantage may in fact be completely overcome by the operating costs. |
23:57:05 | tromp__: | so for someone to match that they'd have to invest in 100M *1GB |
23:57:14 | gmaxwell: | You can convert everything in this comparison into dollars (or dollar-equivalent joules) if you like.
23:57:49 | gmaxwell: | And hardware costs are one time, so they amortize. |
23:57:56 | tromp__: | that's WAY harder than in the bitcoin world, where a modest investment can match the combined gpu hashing power in the world
23:58:28 | gmaxwell: | That's the analysis which I have pointed out several times is flawed.
23:58:41 | gmaxwell: | The operating costs are the supermajority of the costs, not the hardware costs. |
23:59:04 | nsh: | * nsh wonders . o O {is progress-freeness definitely essential for consensus-POW?} |
23:59:09 | tromp__: | in any case, what you propose is that an "attacker" can basically buy a shitload of PCs to do cuckoo hashing and amortize their cost
23:59:39 | gmaxwell: | The advantage you can get in bitcoin comes from the fact that dedicated hardware is enormously more power efficient. (it's also worth noting that the speed of all the current bitcoin parts is predominantly power limited, they could run much faster, but they'd require more expensive packages and/or exotic cooling)