00:36:37 | koval-afk: | koval-afk is now known as koval |
00:44:02 | gmaxwell: | So, is the following possible: There are Q separate uncoordinated random beacons enumerated in advance, M of them are currently functioning. Each client receives some subset of at least N of these beacons, and of these up to T may be corrupt (corrupt meaning different clients got different values for the same beacon). |
00:44:34 | gmaxwell: | Under what values can the clients determine a single consensus composite random beacon? |
00:44:46 | gmaxwell: | This is kind of like robust secret sharing, but there is no dealer step. |
00:55:13 | gmaxwell: | I suppose this is also related to fuzzy extractors. |
01:01:07 | shinybro: | shinybro is now known as negrodamus |
01:31:45 | Guest52399: | Guest52399 is now known as pigeons |
01:39:45 | negrodamus: | negrodamus is now known as RondaRousey |
01:41:28 | RondaRousey: | RondaRousey is now known as GrandElephant |
03:17:13 | BurritoBazooka: | BurritoBazooka is now known as Burrito |
03:29:14 | stqism: | stqism is now known as ^stqism |
03:29:20 | ^stqism: | ^stqism is now known as stqism |
07:14:17 | stqism: | stqism is now known as rg |
07:14:26 | rg: | rg is now known as stqism |
07:15:17 | stqism: | stqism is now known as rg |
07:15:25 | rg: | rg is now known as stqism |
09:58:10 | dansmith: | dansmith is now known as dansmith_btc |
11:03:42 | c0rw1n_: | c0rw1n_ is now known as c0rw|away |
12:24:11 | nsh: | this paper is somewhat intriguing: http://www.research.lancs.ac.uk/portal/en/publications/coupling-functions-enable-secure-communications(af75e688-e455-417c-8a0b-ab45cf4fa3b4)/export.html |
12:24:46 | nsh: | "Inspired by the time-varying nature of the cardiorespiratory interaction, here we introduce a new class of secure communications that is highly resistant to conventional attacks." |
12:25:06 | nsh: | but the described scheme relies on a (form of) pre-shared key |
12:25:20 | nsh: | not entirely sure what the claimed resistance to attacks actually means |
12:26:08 | nsh: | but the idea of modulating the coupling functions between paired chaotic oscillators as an information channel is pretty cool, anyway |
12:32:06 | gmaxwell: | I read "apply Bayesian inference on the receiver side" as "decoding our signals is intractable." |
12:36:10 | nsh: | * nsh smiles |
12:40:20 | HM: | you guys working on those traffic light control extensions yet? ;) |
13:06:57 | mr_burde_: | mr_burde_ is now known as mr_burdell |
15:54:29 | gmaxwell: | I found this to be a refreshing contrast to the hype: https://news.ycombinator.com/item?id=7547146 |
16:00:35 | anycoin: | anycoin has left #bitcoin-wizards |
16:03:00 | nsh: | hmm |
16:09:25 | amiller: | "Turing-completeness is basically irrelevant either way." yes, finally |
16:15:27 | maaku: | now if only he can reach a similar conclusion about fees in the consensus code |
16:18:45 | hearn: | the core Ethereum design has changed so many times now I can’t keep up anymore |
16:27:11 | kanzure: | is this new? (re: ethereum) http://gavwood.com/Paper.pdf |
16:30:47 | Ursium: | yup it's new |
16:34:16 | gmaxwell: | [context: radical script replacement] So I've been racking my brain a bit about how to get fully flexible transaction signature masking in a clean way, and I've finally got something I like. |
16:34:39 | maaku: | gmaxwell: please share! |
16:35:59 | gmaxwell: | one initial idea is that it would be simple enough to have an OP_TXPARSE that just takes the current transaction and breaks it out and pushes it all to the stack. e.g. "... [value] [scriptpubkey] [n outputs]... [in sig] [invout] [intxid] [n inputs] [version]" |
16:36:35 | gmaxwell: | and then you could put the things you want to sign onto a signature stack and that's what checksig signs, ... but a problem there is that making it emulation-proof (e.g. preventing someone else from just pushing the same data) is tricky. |
16:36:41 | gmaxwell: | Here is I think a clean way to do that. |
16:37:51 | gmaxwell: | OP_CHECKSIGSCRIPT pops pubkey then ScriptSigScript then Sig off the stack. ScriptSigScript is an OP_EVAL style nested script that runs on its own scope-stack, but has access if it wants to the normal stack. |
16:38:26 | gmaxwell: | the checksig hash then covers the ScriptSigScript and everything that remains on its scope stack. |
16:38:48 | gmaxwell: | so if you want to e.g. sign any transaction that pays at least 1 btc to 1apple then you make the ScriptSigScript look like |
16:39:01 | maaku: | Yes, that's an elegant solution |
16:40:46 | gmaxwell: | OP_TXPARSE OP_DROP 3 OP_MUL OP_DROPN OP_FROM_MAIN_STACK 0 OP_WITHIN OP_PICK ... 1e8 OP_GEQ OP_VERIFY 1apple OP_EQUALVERIFY ... |
16:41:50 | gmaxwell: | e.g. it, at its own option, takes the exact index of the txout being signed from the main stack, so that the transaction can be modified and the position of the 1apple output moved around. |
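A minimal sketch of the hashing rule gmaxwell describes for OP_CHECKSIGSCRIPT, in Python with hypothetical serialization details (length prefixes, double-SHA256): the digest commits to the nested ScriptSigScript itself plus whatever it left on its scope stack, so the signer's own script decides which parts of the parsed transaction get signed.

    import hashlib

    def checksigscript_digest(script_sig_script: bytes, scope_stack: list) -> bytes:
        # Hypothetical digest for OP_CHECKSIGSCRIPT: hash the nested script and
        # everything remaining on its scope stack after evaluation, so the
        # signature covers exactly the data the signer's script chose to keep.
        h = hashlib.sha256()
        h.update(len(script_sig_script).to_bytes(4, "little"))
        h.update(script_sig_script)
        for item in scope_stack:
            h.update(len(item).to_bytes(4, "little"))
            h.update(item)
        return hashlib.sha256(h.digest()).digest()  # double-SHA256 as usual in Bitcoin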
16:43:22 | maaku: | gmaxwell: Aside, the timelock encryption POW idea has dropped from the hardfork wishlist. Is there a new home for it, or an up to date description? |
16:43:27 | gmaxwell: | in any case, I think it's reasonably clean. The worst things about it, I think, are (1) OP_TXPARSE OP_TXPARSE OP_TXPARSE ... is a pretty horrific pattern memory-usage-wise with a really simplistic implementation... and (2) patterns like sighash-single take a fair number of bytes. |
16:43:56 | gmaxwell: | maaku: it was never on the hardfork wishlist. it's on my altmusings page. I think it's far too speculative to call it a wishlist thing. |
16:46:06 | maaku: | gmaxwell: Ah ok, I see it. Are the outstanding problems major? |
16:46:27 | hearn: | gmaxwell: script already has two stacks, right? |
16:46:55 | hearn: | an alternative might be a taint flag on stack elements |
16:47:27 | hearn: | though, actually, i’m not totally sure i understand the emulation problem |
16:47:29 | gmaxwell: | hearn: yea, I'd been thinking of that previously, basically having "tagged elements" which mark where they really came from. But I think the resulting design is fairly complicated. |
16:47:31 | hearn: | * hearn needs to think it through more |
16:47:45 | maaku: | gmaxwell: well we'd probably have to keep a checksig opcode implementation around for compatibility anyway, so that could be used directly |
16:47:53 | maaku: | for sighash single and such |
16:48:10 | gmaxwell: | hearn: e.g. imagine today we had an OP_CHECKSIG that took the data being checked from the stack, and the scriptSig built that data (e.g. by picking it out of the transaction and hashing it). |
16:48:39 | gmaxwell: | maaku: oh sure sure, but you know, a sign of a good design is that it magically covers the cases that matter well. Doesn't always work, good to strive for. |
16:48:41 | maaku: | hearn: imagine splitting OP_CHECKSIG into OP_SIGHASH (push tx hash on stack) and OP_SIGVERIFY |
16:49:23 | maaku: | hearn: then you could just OP_PUSH OP_SIGVERIFY to steal someone's funds |
16:49:40 | maaku: | (after they broadcast a spending transaction) |
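A toy illustration (Python; an HMAC stands in for ECDSA, and all names are made up) of the emulation problem maaku describes: if the opcode that verifies a signature just trusts whatever digest is already on the stack, a published signature can be replayed inside a different transaction.

    import hashlib, hmac, os

    secret = os.urandom(32)                          # the coin owner's key (toy stand-in)
    def sign(digest):           return hmac.new(secret, digest, hashlib.sha256).digest()
    def sigverify(sig, digest): return hmac.compare_digest(sig, sign(digest))

    honest_tx   = b"tx paying the owner's intended destination"
    attacker_tx = b"tx paying the attacker instead"

    # Honest spend: OP_SIGHASH derives the digest from the transaction itself.
    honest_digest = hashlib.sha256(honest_tx).digest()
    sig = sign(honest_digest)
    assert sigverify(sig, honest_digest)

    # Attack: inside attacker_tx, skip OP_SIGHASH and simply OP_PUSH the old
    # digest, then reuse the published signature. It still verifies, yet it
    # commits to nothing about attacker_tx, which is why the naive split into
    # OP_SIGHASH / OP_SIGVERIFY is unsafe.
    assert sigverify(sig, honest_digest)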
16:49:45 | gmaxwell: | hearn: I think any fancy script stuff we've talked about before— the merkleized AST stuff, for example— would have added a scope-stack. |
16:51:07 | gmaxwell: | the emulation problem wouldn't exist if the scriptpubkey did the masking, but we want the scriptsig to do it. My first thought had been some complex tagging proposal where you tag how the data got onto the stack and include the tags in the hashing, but I went to implement it and it's a mess. So I went back to think about why it would work in the scriptpubkey-masks case, and that's where I got the ScriptSigScript thought. |
16:52:31 | hearn: | right, right, it’s the scriptSig that has to contain smart logic now |
16:52:38 | hearn: | for the sighash flags |
16:52:43 | hearn: | got it |
16:53:19 | hearn: | TXPARSE might still be useful for other things. basic coin covenants etc. |
16:58:26 | gmaxwell: | oh sure, absolutely. There are in-pubkey uses for it. That's one of the reasons I like the fully general approach even if there exist specialized mechanisms for sighash-all... just because it factors nicely. |
17:00:19 | gmaxwell: | maaku: so for the timelock pow stuff, I handwaved really hard... I think it's fundamentally possible but the details are not worked out. The idea of using EC cracking isn't actually great because of the quadratic TMTO speedup you get (which I think can still leave the mining part progress-free, but might have weird results where hashpower consolidations can solve the lock early very often and keep it to themselves). I think you want a cryptosystem where the best attacks are fully exponential time. |
17:01:49 | gmaxwell: | I also waved my hand at the difficulty-changing part. Obviously you can't have future target keys change, so if the difficulty is too low you should just need to work on multiple problems... but this might mean that the timelock could end up weak... I waved my arms about some kind of threshold where you make guesses about the future hashrate, and if you guess too high the lock just expires late.. but it's still not very well formed. |
17:03:17 | maaku: | gmaxwell: anything I can do to help nail down these issues? |
17:08:40 | gmaxwell: | Go find an implementation of an asymmetric cryptosystem where (1) it is believed that the best attacks are fully exponential (e.g. guess a random key) and (2) it's easy to coerce a random value into a pubkey which is guaranteed to have a matching secret key. |
17:09:05 | gmaxwell: | EC achieves (2), but fails (1) (rho gives you a quadratic speedup if you have more points) |
17:09:30 | maaku: | "easy to coerce" means what? |
17:11:12 | gmaxwell: | maaku: e.g. I can take a random integer out of it and in constant-ish time convert it into an EC public point (e.g. increment until the curve equation passes), and the result is a valid pubkey, which has a real secret key which we don't know. |
17:11:29 | maaku: | ok |
17:11:37 | gmaxwell: | this isn't true, for example, for RSA. (Okay, you'll get a pubkey, but it's likely to be insecure!) |
17:12:12 | gmaxwell: | the way to do this for RSA is the RSA UFO stuff which is hideously expensive. and results in ginormous pubkeys. :) |
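A sketch of property (2) for EC (Python, illustrative only), using the secp256k1 field and the try-and-increment approach gmaxwell describes: bump the integer until the curve equation has a solution, giving a valid point whose secret key exists but is known to nobody.

    # secp256k1 field prime; P % 4 == 3, so a square root is pow(a, (P+1)//4, P)
    P = 2**256 - 2**32 - 977

    def coerce_to_pubkey(x: int):
        # Try-and-increment: bump x until x^3 + 7 is a quadratic residue mod P,
        # then return the point (x, y). The matching secret key exists but is
        # unknown to everyone, which is property (2) above.
        x %= P
        while True:
            y2 = (x * x * x + 7) % P
            y = pow(y2, (P + 1) // 4, P)
            if (y * y) % P == y2:          # y2 really was a square: valid point
                return (x, y)
            x = (x + 1) % P

    print(coerce_to_pubkey(int.from_bytes(b"beacon output or nonce", "big")))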
17:13:13 | nsh: | in the beginning, god created some ginormous keypairs... |
17:13:37 | hearn: | you’re wanting to do timelock encryption? |
17:13:53 | hearn: | i thought that was now “solved” in the theoretical sense |
17:14:05 | nsh: | how? |
17:14:37 | hearn: | use the candidate indistinguishability obfuscation to obfuscate your AES key, then have your obfuscated program do SPV difficulty checking on bitcoin block headers to verify that the time field is within bounds and goes far enough |
17:15:10 | hearn: | i say “solved” because of course the indistinguishability obfuscation isn’t really implementable at the moment, iiuc |
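A rough sketch (Python, hypothetical parameter names) of the inner program hearn describes, i.e. what one would feed to the iO obfuscator: it holds the timelocked key and releases it only when shown a chain of Bitcoin headers whose proof-of-work checks out and whose last timestamp has passed the release date. A real version would also need to bound the per-header difficulty; this only shows the shape of the check.

    import hashlib, struct

    SECRET_KEY   = b"\x00" * 32      # the timelocked AES key, baked into the program
    MIN_HEADERS  = 2016              # how much chain must be presented
    RELEASE_TIME = 1500000000        # unix time after which the key may come out

    def dsha(b):
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def bits_to_target(nbits):
        exp, mant = nbits >> 24, nbits & 0xFFFFFF
        return mant << (8 * (exp - 3))

    def release_key(headers):
        # headers: list of raw 80-byte Bitcoin block headers
        if len(headers) < MIN_HEADERS:
            return None
        prev = None
        timestamp = 0
        for h in headers:
            _, prev_hash, _, timestamp, nbits, _ = struct.unpack("<I32s32sIII", h)
            if prev is not None and prev_hash != prev:
                return None          # headers don't chain
            if int.from_bytes(dsha(h), "little") > bits_to_target(nbits):
                return None          # header doesn't meet its own claimed target
            prev = dsha(h)
        if timestamp < RELEASE_TIME:
            return None
        return SECRET_KEY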
17:15:59 | maaku: | hearn: I want to do timelock encryption proof-of-work |
17:16:41 | hearn: | that sort of is …. the unforgeable passage of time is recorded by the PoWs on the block headers |
17:16:53 | hearn: | it’s indirect timelock encryption with PoW |
17:17:33 | nsh: | the magic-program-that-spits-stuff-out-when-arbitrary-conditions-are-met dependency is still quite strong :) |
17:17:38 | gmaxwell: | hearn: yea, it's not, and it's very hocus pocus at the moment. (actually, andytoshi and I have a nice potshot at the proof in the paper about that. :) ). |
17:18:16 | hearn: | nsh: hey, i did say theoretical sense :) |
17:18:21 | nsh: | * nsh smiles |
17:18:45 | hearn: | but if they *can* make iO work then, well, you get timelock encryption “for free” |
17:18:51 | gmaxwell: | hearn: header PoW doesn't (without indistinguishability obfuscation) let you have a private key that is only known after some date, alas, and even with iO it doesn't appear possible to escape trusted initialization. |
17:18:55 | hearn: | * hearn wishes they’d hurry up and make PIR work |
17:19:14 | hearn: | right |
17:19:29 | andytoshi: | hearn: that is the CRS; the person who implements the obfuscation can slip back doors and shit in there. See the paper on 'punctured programs', which uses this kind of back-door trickery to prove security |
17:20:16 | hearn: | but i do not know of any more likely candidate for real timelock encryption, outside of secure hardware based solutions |
17:20:17 | andytoshi: | also i'm not 100% sure that obfuscation hides the key, though thx to gmaxwell's comments i have found that we can do it with PKE so i'm reasonably confident |
17:20:58 | andytoshi: | https://eprint.iacr.org/2013/454 is the punctured programs paper, it is a pretty neat proof technique |
17:21:14 | nsh: | ty |
17:22:11 | gmaxwell: | Hey, information theoretic PIR works! like .. you can actually run code and it is usable. The computational PIR is ... well. yea. Still better than the candidate IO paper, which works by defining an IO for only a narrow class of circuits, and then using the IO to house a decryption key for FHE, and then you run your program in FHE and use the IO to decrypt the output. :) Actually their (kind of crazy, IMO) proof requires you to run the FHE program twice with different keys and record a trace of its execution and feed that into IO... because hey, if you're already doing a completely impractical computation, why not do it twice. |
17:22:21 | gmaxwell: | :P |
17:23:00 | andytoshi: | :) but with your comments, i (think i) proved that you don't have to do it twice, i tried to drop by waters' office today but he was busy.. |
17:23:04 | maaku: | andytoshi: your thoughts on ticking timelock POW would be appreciated too |
17:24:20 | andytoshi: | maaku: i've thought a bit about it. i've got a blog post http://wpsoftware.net/andrew/blog/?post=impossible-crs which uses the timelock scheme hearn suggested as an argument that the CRS model is not equivalent to the standard model (that is, you cannot eliminate CRS assumptions in general) |
17:24:27 | gmaxwell: | andytoshi: well it's still kind of circular right? the proof is that if that other thing worked, the simpler version must work too. But they needed the other thing to escape the impossibility proof, so I think we've divided by zero and the universe ends. :) |
17:25:28 | andytoshi: | gmaxwell: well, the argument in the paper wasn't -proven necessary- to escape the impossibility proof |
17:25:53 | maaku: | andytoshi: yeah I'm very skeptical of this obfuscation stuff. I meant more of the traditional find-private-key-as-pow sort :) |
17:25:56 | andytoshi: | i don't think it's circular |
17:26:20 | gmaxwell: | andytoshi: Okay fair. |
17:26:31 | andytoshi: | but i think it might mean that the impossibility proof implies some sort of impossibility for simulation-strong PKE |
17:26:46 | nsh: | simulation-strong? |
17:26:58 | andytoshi: | (whatever simulation-strong PKE means, i'd have to chase the simulation secure obfuscation definition through the proof to see what i get) |
17:27:14 | nsh: | * nsh googles |
17:27:15 | andytoshi: | nsh: one sec, that's not quite the right term.. |
17:28:13 | andytoshi: | nsh: 'virtual black box' obfuscation means that if i give you the obfuscated program, you can't learn anything more than if i gave you access to a program-evaluation oracle (a "simulator") |
17:28:31 | nsh: | mm |
17:28:54 | andytoshi: | nsh: whereas 'indistinguishability obfuscation' means that you cannot distinguish between two functionally-identical circuits, given only their obfuscations, assuming you are computationally bounded |
17:29:16 | nsh: | right |
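The same two definitions, stated a little more formally (standard forms, with lambda the security parameter and negl a negligible function):

    % Virtual black box (VBB): for every PPT adversary A there is a PPT
    % simulator S with oracle access to C such that, for every circuit C,
    \[
      \bigl|\Pr[A(\mathrm{Obf}(C)) = 1] - \Pr[S^{C}(1^{|C|}) = 1]\bigr| \le \mathrm{negl}(\lambda).
    \]
    % Indistinguishability obfuscation (iO): for all equal-size circuits
    % C_0 \equiv C_1 computing the same function, and every PPT distinguisher D,
    \[
      \bigl|\Pr[D(i\mathcal{O}(C_0)) = 1] - \Pr[D(i\mathcal{O}(C_1)) = 1]\bigr| \le \mathrm{negl}(\lambda).
    \]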
17:29:48 | andytoshi: | so what gmaxwell and i did gives a link between the IND-CPA security of FHE (a computationally bounded attacker cannot distinguish between plaintexts given only their encryptions for the FHE) |
17:29:58 | andytoshi: | and indistinguishability obfuscation |
17:31:30 | andytoshi: | but if the FHE had some strong simulation-y security property, the same argument should give black-box obfuscation, which is impossible. |
17:31:43 | hearn: | it’s only impossible for some very carefully chosen programs |
17:31:46 | andytoshi: | maybe that is not true, it's just an intuition |
17:31:51 | hearn: | it may not be impossible for actual useful programs |
17:32:14 | andytoshi: | hearn: right, and there is a result to the effect that -if- black-box obfuscation is possible for some class, then indistinguishability obfuscation gives you it o.O |
17:32:39 | nsh: | one paper suggests a "pseudo-entropy" measure/bound that might define the possibility threshold |
17:33:00 | nsh: | "The Impossibility of Obfuscation with Auxiliary Input or a Universal Simulator" |
17:33:28 | gmaxwell: | It's a VERY wanky argument though. |
17:34:06 | andytoshi: | i can't seem to find it.. |
17:34:42 | nsh: | ( http://eprint.iacr.org/2013/665.pdf ) |
17:34:57 | gmaxwell: | Basically it says: say you have a BB-obf that works on program X. So you feed BB-obf(X) and X into your ID-OBF .. and the two results must be indistinguishable if the ID-OBF worked. Therefore, for X, ID-OBF is just as good as BB-obf. |
17:35:05 | andytoshi: | GR07 |
17:35:26 | gmaxwell: | It's the sort of argument your mom gives you when you explain what you're working on and it invalidates years of work. :P |
17:37:15 | andytoshi: | gmaxwell: you are referring to the "ind-obfs implies BB" result "On best-possible obfuscation" goldwasser/rothblum 2007? |
17:39:17 | gmaxwell: | yes, the result that ID-OBF is the best possible obfuscation. Because if there were some better obfuscation, ID-OBF would make your unobfuscated program indistinguishable from the best possible. Therefore it's at least equal to the best possible. |
17:39:39 | gmaxwell: | Note that you don't have to know anything about... well.. anything... to make this argument. It's a construct of pure logic. |
17:39:43 | andytoshi: | and as you pointed out, that argument is robust to the definition of "best" :) |
17:40:39 | gmaxwell: | yea, usually when I find myself making arguments of pure logic it's a red flag that I've done something dumb with my definitions or assumptions to create a trivial tautology. |
17:41:31 | andytoshi: | here i don't think you have, i think we're on solid ground re the definitions but they are way stronger than intuitively obvious.. |
17:44:12 | andytoshi: | a reason that things are not tautological is that "ID-OBF is the best possible" depends on "best possible" actually being attainable |
17:44:36 | andytoshi: | maybe you don't like that, i did an entire math degree and have become desensitized to such arguments :P |
17:44:55 | nsh: | constructivist cat is watching you obfuscate on obfuscatability |
17:45:12 | nsh: | :) |
17:46:09 | gmaxwell: | There are degrees of liking. |
17:49:16 | andytoshi: | my personal suspicion is that BB is actually impossible, but this result that ID-OBF("check the FHE proof and decrypt") hides the key from a computationally bounded attacker suggests that ID-OBF is just as useful in practice |
17:53:08 | nsh: | just how well hidden something is through computational complexity should be possible to make explicit in terms of algebraic properties of the underlying mathematical structure and assumptions about the relative cost of operations |
17:53:50 | gmaxwell: | andytoshi: hah well subject to the not actual usefulness of FHE due to performance problems. :) |
18:27:36 | maaku: | gmaxwell: you may have to explain to me how the quadratic speedup of rho is a problem. it seems to me that whether you end up in the right cycle is still random, so it's sufficiently progress-free |
18:28:05 | gmaxwell: | maaku: the mining part is progress free because really the cracking is a side effect. |
18:28:28 | gmaxwell: | But as you do your cracking you record points and the cracking itself is not progress free... |
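For reference, the rough shape of the multi-target speedup gmaxwell means (constants omitted; this is the standard batch-discrete-log behaviour of rho/lambda walks with stored distinguished points):

    \[
      \text{work to solve } m \text{ discrete logs in a group of order } n \;\approx\; \sqrt{m\,n},
      \qquad
      \text{marginal work per key} \;\approx\; \sqrt{n/m},
    \]
    % i.e. a miner who keeps its accumulated points gets each further key more
    % cheaply, which is why the cracking itself is not progress-free.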
18:31:12 | maaku: | my holdups are more about how you make difficulty adjustments and keep people from just cracking the keys they need to unlock something |
18:31:22 | maaku: | i know we discussed that in the past but now I can't find the log :( |
18:33:25 | gmaxwell: | just cracking the keys they need to unlock something = first you encrypt with every key between now and then, so it's cumulative. |
18:33:55 | gmaxwell: | well what you do is have N parallel problems, and as you turn on more difficulty you start working on the parallel problems (keeping that progress-free may be fun). This doesn't prevent someone from rushing ahead at the lowest problem level, so what you can do is guess the future difficulty and encode with all the problems you think people will have worked on... and to hedge against you overestimating the difficulty you also encrypt in parallel with later dates but assuming lower difficulty. |
18:34:11 | gmaxwell: | which is still perhaps a little too much 'sum work' instead of 'sum time' |
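A toy sketch (Python; a SHA-256 keystream stands in for a real cipher, and the keys here are random rather than crackable EC keys) of the cumulative layering described above: the payload is wrapped under every period's key in order, so it only opens once all keys between now and the release date have been recovered.

    import hashlib, os

    def xor_stream(key: bytes, data: bytes) -> bytes:
        # Toy stream cipher: XOR with a SHA-256 counter-mode keystream.
        out, ctr = bytearray(), 0
        while len(out) < len(data):
            out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return bytes(a ^ b for a, b in zip(data, out))

    period_keys = [os.urandom(32) for _ in range(5)]   # one to-be-cracked key per period

    def timelock(plaintext: bytes) -> bytes:
        ct = plaintext
        for k in period_keys:                          # encrypt under every key between now and then
            ct = xor_stream(k, ct)
        return ct

    def unlock(ciphertext: bytes, cracked_keys) -> bytes:
        pt = ciphertext
        for k in reversed(cracked_keys):               # need them all to get the payload back
            pt = xor_stream(k, pt)
        return pt

    ct = timelock(b"open after the last period")
    assert unlock(ct, period_keys) == b"open after the last period"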
18:36:13 | maaku: | Yeah very hackish. I somehow remembered you having a more elegant / convincing solution :P |
18:37:00 | maaku: | The scaling problem I see is letting this scale by orders of magnitude without proof sizes also scaling |
18:39:45 | maaku: | hrm... i guess you could increase the curve size |
18:40:00 | gmaxwell: | well that's not so bad perhaps, e.g. your next level up could be targeting an unknown key in a larger system... so you could have some thresholding there. (for EC it's kind of lame because parameter selection is not so easy... having a 64 bit curve, 65 bit, 66 bit, etc. is not so fun. :) ) |
18:42:39 | gmaxwell: | well I suppose it's not that big a deal, it's not like you'd need to generate more than two dozen curves. |
18:43:38 | gmaxwell: | Though as I said, I don't think EC is ideal, simply because a big miner is not progress-free on the actual cracking. And if you make a rule that the solution is an automatic block (to ensure people publish it!) then it creates non-progress-freeness overall. |
19:35:28 | stqism: | stqism is now known as rg |
19:35:39 | rg: | rg is now known as stqism |