00:06:27ryan-c:gmaxwell: do you still have that ibm 4764 and did you ever get the dev kit for it?
00:07:20gmaxwell:ryan-c: I do, and I've been unable to get a devkit, though I've only put a couple hours into trying. Actually I have two more of them now.
00:08:46ryan-c:gmaxwell: Would you like me to inquire about the devkit for you? I know an IBM lifer.
00:09:12gmaxwell:That would be pretty awesome.
00:09:25ryan-c:and by know, i mean that I'm marrying his daughter in a few months
00:09:43sipa:maybe wait until you're married
00:09:48kanzure:"what's a devkit between inlaws?"
00:09:51gmaxwell:hah
00:09:51kanzure:yes
00:09:53ryan-c:lol
00:11:01gmaxwell:ryan-c: my plan after the first failing attempt was to just contact ibm sales with my business hat on, but .. ugh, sales. At the moment I'm waiting on that until I have a concrete reason to buy some immediately, so that I don't waste their time.
00:12:09gmaxwell:they're pretty expensive when not found as mysterious surplus parts, about 10 grand. Not an unreasonable cost for a business, but decidedly outside of normal experiment parts pricing.
00:13:08ryan-c:I came across you complaining about not being able to get a devkit when I was googling for information about what HSMs might support secp256k1 (someone asked me about it).
00:13:32ryan-c:gmaxwell: why'd you say that pretty much everything other than the IBM ones were crap?
00:14:03gmaxwell:yea... HSM market sucks, supporting secp256k1 isn't the half of it.. if the HSM can't embed business logic then it's not really very useful.
00:14:18ryan-c:gmaxwell: yeah, i agree about the business logic bit.
00:15:00ryan-c:kinda pointless to protect the key from theft whilst allowing arbitrary operations to be done with it
00:15:17kanzure:er, but nobody is proposing you hook up your HSM straight to the interwebs
00:15:20ryan-c:especially for cryptocurrency
00:15:35kanzure:presumably you have some authorization process before allowing a signature (absent some business logic stored on the hsm)
00:15:55gmaxwell:sure but if that was adequate you could just cut out the hsm and put your keys in that.
00:16:13ryan-c:kanzure: if you want to allow automated signing based on some rules, the hsm provides little, if any, security benefit.
00:16:24kanzure:ah right, i keep forgetting the default assumption is automated signing
00:16:31kanzure:my hsm use does not involve automatic signing
00:16:48ryan-c:it maybe makes more sense where the things that you could do with the key could be undone, or where you don't want the sysadmin to steal the key.
00:16:51kanzure:it's really strange, gmaxwell has brought this up in the past too, i wonder why i keep forgetting
00:30:57phantomcircuit:ryan-c, you can rate limit those arbitrary operations which could be useful
00:31:09phantomcircuit:but substantially less useful than generic business logic
00:31:15ryan-c:phantomcircuit: yeah
00:31:23kanzure:if you have bitcoin blockchain validation stuff happening in the hsm, you could rate limit outgoing bitcoin transaction amounts, that would be cool
00:31:35phantomcircuit:(that is effectively how intersango's hot wallet operated)
00:31:46ryan-c:kanzure: that would be "business logic"
00:31:48kanzure:where did you put intersango's hot wallet?
00:31:54phantomcircuit:separate computer which received payment instructions and applied rate limiting
00:32:07kanzure:home network lan? :p
00:32:37phantomcircuit:kanzure, for a long time it was in my bedroom on a gentoo hardened box and instrumented to shutdown if anything weird was detected
00:32:47gmaxwell:it would be pretty straightforward to get an ibm cryptocard to do blockchain-clocked value-per-time limits.
00:32:49phantomcircuit:(pretty craptastic hack which checked iptables counters)
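
As a rough illustration of the rate-limiting idea above (and of the blockchain-clocked value-per-time limits gmaxwell mentions), a signer can track how much value it has authorized since some block height and refuse to sign past a cap until enough blocks have gone by. The sketch below is hypothetical Python, not any real HSM's firmware: the constants, class, and method names are made up, and a real device would also need to validate the block headers it uses as its clock (e.g. SPV-style) so the "time" can't be spoofed.

    # Hypothetical sketch of a blockchain-clocked value-per-time limit.
    MAX_PER_WINDOW = 10_000_000   # satoshis allowed per window (made-up policy)
    WINDOW_BLOCKS = 144           # roughly one day of blocks

    class RateLimitedSigner:
        def __init__(self):
            self.window_start_height = None
            self.spent_in_window = 0

        def authorize(self, amount, current_height):
            # Reset the window once enough blocks have passed; using block
            # height as the clock means faking the passage of time requires
            # feeding in proof-of-work, not just lying about the wall clock.
            if (self.window_start_height is None
                    or current_height - self.window_start_height >= WINDOW_BLOCKS):
                self.window_start_height = current_height
                self.spent_in_window = 0
            if self.spent_in_window + amount > MAX_PER_WINDOW:
                return False  # refuse to sign; this window's budget is spent
            self.spent_in_window += amount
            return True       # caller may proceed to sign
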
00:34:38ryan-c:I have some stuff I cooked up that runs on a raspberry pi that sends stuff over serial and lets code on the pi actually build the transaction (which must follow a defined policy)
00:35:19ryan-c:If I'm going to be super paranoid, I don't want to expose the network stack. :-P
00:35:32gmaxwell:rpi unreliability is annoying, also take care to not go through all that trouble and have a stupid timing attack vector.
00:36:16phantomcircuit:https://0bin.zertrin.org/paste/c37d7276dd833645a4403b9565da06f9f08b7589#yflkKzPJ6Dek4RybcHGul8NAuMeWmJy3BGNf188j5pY=
00:36:17ryan-c:gmaxwell: If i were putting it into production, I would use a real computer.
00:36:23phantomcircuit:any ideas why that wont relay?
00:37:35gmaxwell:phantomcircuit: my node took it, put it in the mempool.
00:37:51phantomcircuit:hmm
00:37:54gmaxwell:that node is on git master as of this am.
00:38:05gmaxwell:you already have a conflict?
00:39:01phantomcircuit:oh i see
00:39:07phantomcircuit:it's been 36 minutes
00:39:10phantomcircuit:the last block was btc guild
00:39:21phantomcircuit:i bet they're running rules old enough that the op_return is non standard
00:39:26ryan-c:gmaxwell: ugh, timing attacks, they ruin many things
00:39:47phantomcircuit:gmaxwell, the rpi sdcard holder bends the sdcard
00:40:02phantomcircuit:i went through a number of cards before i realized what was going on
00:40:22ryan-c:I haven't had reliability issues with mine, fwiw
00:40:24phantomcircuit:flexes is probably a better term
00:40:46ryan-c:i had them in cases which don't have anything putting pressure on the sd card
00:41:19phantomcircuit:ryan-c, the sdcard holder on the rpi holds the edges of the card and applies pressure uniformly to the pins
00:41:21ryan-c:I like the low profile microsd adapters
00:41:27gmaxwell:Anyone know, off the top of your head, of a solver that takes a large number of N-dimensional binary vectors, each with a weight, and finds a minimum-weight set of vectors that together have a 1 in each dimension?
00:41:28phantomcircuit:which causes the card to flex in the middle
00:41:50gmaxwell:(believe it or not that question is somewhat on-topic)
00:42:16ryan-c:gmaxwell: like a constrain solver?
00:42:20ryan-c:constraint
00:42:20kanzure:sounds like something a constraint solver would do
00:42:22kanzure:doh
00:42:49TheSeven:oh well, why do people still use raspberry pi's...
00:42:58gmaxwell:yes, it's a constraint solver, but I was hoping for one optimized for this task. (I mean I could just use minion to do it, but I expect it would be slow)
00:43:45ryan-c:TheSeven: I think the answer is "they're cheap". I was looking into the odroid-c1 recently, which looks pretty awesome
00:43:59TheSeven:yes, I was thinking about that one (or possibly an olinuxino) as well
00:44:07sipa:gmaxwell: so per 'column' an OR over all the vectors should be one, while minimizing the sum?
00:44:07ryan-c:about the same price as an rpi too
00:44:19gmaxwell:sipa: yes.
00:44:28kanzure:gmaxwell: i've been meaning to look into this one eventually http://www.gecode.org/
00:44:34sipa:i've used gecode
00:44:56kanzure:http://www.gecode.org/doc-latest/reference/index.html
00:44:57gmaxwell:kanzure: yea, gecode usually requires a bunch of per task coding.
00:45:01kanzure:ah interesting
00:45:06kanzure:i was looking into gecode for parametric cad modeling reasons
00:46:18gmaxwell:sipa: what I'm doing is I have a kabazillion test vectors for libsecp256k1, and for each vector I have a bitmap of every outcome for every branch in the code. And I want to find the smallest set of test vectors that hits all of them. (so hopefully it will be small enough to just stick in tests.c)
00:47:26sipa:doesn
00:47:29kanzure:could you do that by just looking at every line of code to decide how to hit those branches?
00:47:29sipa:doesn't sound hard
00:47:40sipa:but i have no clue about the performance
00:48:00gmaxwell:well right, I'd also like to do this for several hundred million input vectors. :P
00:48:13gmaxwell:I suppose I can probably reduce it a lot with a preprocessing step.
00:48:49gmaxwell:kanzure: not to give the smallest set of tests.
00:49:13kanzure:if you have 30% (unknown) redundancy that doesn't sound too bad
00:54:20sipa:gmaxwell: define 'kabazillion' ?
00:55:10kanzure:it is 1/1024 of kibizillion
00:55:12gmaxwell:sipa: several hundred million probably. I dunno maybe less, because I can eliminate duplicates. The upper bound is pretty high.
00:56:05sipa:preprocessing by removing all results that subsume some other result would be useful already
00:56:13sipa:*are subsumed by
00:56:44gmaxwell:yea, that might make it small enough that I could just throw minion at it with no worry. Didn't think of doing that until I typed it out here.
00:57:03gmaxwell:well must be subsumed and <= in size.
00:57:10sipa:'size' ?
00:57:56gmaxwell:sipa: my tests are encoded signatures, and I'm trying to minimize the total size.
00:58:31sipa:i just mean individual signatures, look at which branches they trigger
00:58:44gmaxwell:(with a variable length encoding; which is a win, since most cases can be triggered by small signatures.)
00:59:06gmaxwell:sipa: right and what I'm saying is that if one triggers a subset of the branches but it is also smaller, it may be in the optimal solution.
00:59:07sipa:some signatures will trigger a superset of branches of a single other
00:59:25sipa:oh
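
The minimization gmaxwell describes is an instance of weighted set cover: each candidate signature contributes the set of branches it covers plus a weight (its encoded size), and the goal is a minimum-total-weight selection that covers every branch. Below is a rough Python sketch, illustrative only: it combines the subsumption preprocessing sipa suggests (drop a vector only if its coverage is a subset of another's and it is not smaller) with the classic greedy heuristic, which merely approximates the optimum, so an exact solver such as minion would still be wanted for a truly minimal tests.c. The input format here is hypothetical.

    # Rough weighted-set-cover sketch; vectors are (frozenset_of_branches, weight).
    def prune_subsumed(vectors):
        # Drop (cov, w) if some other vector covers a superset of cov with
        # weight <= w (a smaller subset-coverer may still be optimal, so keep it).
        kept = []
        for cov, w in vectors:
            if not any(cov <= c2 and w >= w2 and (cov, w) != (c2, w2)
                       for c2, w2 in vectors):
                kept.append((cov, w))
        return kept

    def greedy_cover(vectors, all_branches):
        uncovered = set(all_branches)   # assumes every branch is coverable
        chosen = []
        while uncovered:
            # Pick the vector with the best weight per newly covered branch.
            best = min((v for v in vectors if v[0] & uncovered),
                       key=lambda v: v[1] / len(v[0] & uncovered))
            chosen.append(best)
            uncovered -= best[0]
        return chosen

    # Toy example: branches {0,1,2,3}, weight = encoded signature size in bytes.
    vecs = [(frozenset({0, 1}), 8), (frozenset({1, 2, 3}), 10), (frozenset({0}), 3)]
    print(greedy_cover(prune_subsumed(vecs), {0, 1, 2, 3}))
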
01:17:26ahmed_:ahmed_ is now known as ahmed_sleep
03:00:32nub33:nub33 has left #bitcoin-wizards
03:13:35roconnor:hey sipa
06:19:13bryanvu:bryanvu has left #bitcoin-wizards
07:30:27lclc_bnc:lclc_bnc is now known as lclc
08:08:11lclc:lclc is now known as lclc_bnc
08:40:19lclc_bnc:lclc_bnc is now known as lclc
08:59:34Pan0ram1x:Pan0ram1x is now known as Guest79176
09:23:14lclc:lclc is now known as lclc_bnc
09:37:32ahmed_sleep:ahmed_sleep is now known as ahmed_
10:00:39lclc_bnc:lclc_bnc is now known as lclc
10:18:19fluffypony:has the validity of this ever been discussed: http://zerocharactersleft.blogspot.co.at/2014/10/zero-confirmation-bitcoin-transactions.html
10:24:12sipa:i don't see what it is trying to achieve
10:25:55fluffypony:no idea, someone just mentioned it to me
10:26:06fluffypony:doesn't seem very zero-conf
10:26:10sipa:it sounds like it is creating a refund transaction with an unconfirmed input... and then claims it is a solution to double spending? wtf
10:28:56sipa:oh i see, it just tries to explain the principle of building transactions that use unconfirmed inputs
10:29:54sipa:nothing new - but it only works for services that don't do more than send money back/further as a result of successful transactions
10:30:28sipa:satoshidice has used that technique for years, and the only result was their customers being hurt by double spending instead of them
10:46:55mbelshe_:mbelshe_ is now known as mbelshe
11:08:21midnightmagic:for a while it was them
11:25:24lclc:lclc is now known as lclc_bnc
13:32:38lclc_bnc:lclc_bnc is now known as lclc
13:34:49Fistful_of_Coins:Fistful_of_Coins is now known as o3u
13:35:18o3u:o3u is now known as Guest69806
13:35:37Guest69806:Guest69806 is now known as Fistful_of_coins
13:46:05fanquake:fanquake has left #bitcoin-wizards
14:24:57lclc:lclc is now known as lclc_bnc
15:48:56roconnor:sipa: Can I argue that broken crypto design and how to avoid it is on-topic here?
15:49:17sipa:sure
15:50:11sipa:not serializing something for min/max looks broken, as it can collide with cases where min/max are specified?
15:50:41roconnor:to recap: https://github.com/openssh/openssh-portable/blob/master/kexgex.c#L72 is the function that hashes a bunch of data for the server to sign for authentication during one of the key exchange methods, specifically the one described in rfc 4419.
15:51:00roconnor:In text it is
15:51:01roconnor:H = hash(V_C || V_S || I_C || I_S || K_S || min || n || max || p || g || e || f || K)
15:51:17roconnor:But there are actually two different methods described in rfc 4419
15:51:31roconnor:SSH_MSG_KEX_DH_GEX_REQUEST_OLD and SSH_MSG_KEX_DH_GEX_REQUEST
15:51:47roconnor:using a different header distinguishes them.
15:52:19roconnor:and the difference is that the old method
15:52:21roconnor:Instead of sending "min || n || max", the client only sends "n". In addition, the hash is calculated using only "n" instead of "min || n || max".
15:52:52roconnor:so that means a hash H = hash(V_C || V_S || I_C || I_S || K_S || n || p || g || e || f || K) is used with the old method
15:53:16roconnor:but, as you've picked up on, the header used to select between the old method and the new method isn't part of the data being hashed.
15:53:40sipa:ha
15:53:50roconnor:So we can try to play a game where a MITM substitutes the old protocol for the new protocol by changing the header
15:54:38roconnor:and tries to create a situation where he gets a signature for the old protocol from the server and gets the client to validate the same serialized data, but under a different interpretation
15:55:42roconnor:one where p and g, which are supposed to be a prime field modulus and a generator of a large multiplicative subgroup respectively, take different values
15:56:01roconnor:perhaps values where discrete logs are easy to compute because the multiplicative subgroup is small.
15:56:42roconnor:anyhow, I tried for half an hour with a friend yesterday, but the conclusion was that there isn't enough leeway in the protocol to make this work.
15:58:47roconnor:Anyhow, even if it is fine, the fact that it takes 30 minutes of understanding incidental details of serialization formats before you can believe the protocol is secure doesn't really inspire confidence.
15:59:26roconnor:If the serialization was different, if f and e were swapped, perhaps something might be possible. Probably not, but it would be easier.
15:59:50gmaxwell:TLS/SSL has had several bugs of this type too. There is some proposal (IIRC for TLS 1.3) to make the session keys basically hash a transcript of ALL the prior headers, because figuring out which ones were needed is apparently beyond human ability.
16:00:37roconnor:gmaxwell: hah, really?
16:00:57roconnor:This was literally the first thing I looked at in OpenSSL and it was already suspicious.
16:01:16roconnor:Not to blame OpenSSL, it is rfc 4419 that is broken.
16:01:19roconnor:er OpenSSH.
16:01:31gmaxwell:There was some ranty complaint I'd responded to recently that included an argument that Bitcoin was "bad" because it didn't have adequate ciphersuite agility. (which isn't really true but whatever). In my response I pointed out that it looked like agility is actually responsible for more security weaknesses than supporting bad ciphersuites.
16:03:44roconnor:My rule of thumb is, if you have an if statement in your data format parser and it is choosing a branch based on data that isn't in the data blob, you are going to have a bad time.
16:04:42roconnor:A bit of a problem is that some of these data formats don't have parsers, but if a parser would have such an if statement, you are still going to have a bad time, even if the parser doesn't exist.
16:04:54sipa:advantage to encryption algorithms (vs hashing): your decoding will fail in this case :)
16:06:27gmaxwell:roconnor: in general these hashed things should also be application distinguished. Otherwise you get some genius user that reuses a key from one application in another; and you find out that there is a potential emulation where you can get the other application to act as a message-of-doom signing oracle.
16:07:11roconnor:Absolutely, though openssh appears to do a reasonable job regarding that.
16:08:35gmaxwell:so if that hash were keyed with "RFC4419.3.1" it likely would have been okay, even missing an important field.
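
A small Python illustration of that keying/domain-separation point: hash an unambiguous tag naming the protocol variant first, and length-prefix every field, so bytes produced under one message format cannot be reinterpreted under the other. The tag strings, field contents, helper names, and choice of SHA-256 below are hypothetical stand-ins, not OpenSSH's actual wire format or code.

    import hashlib
    import struct

    def length_prefixed(value_bytes):
        # Length-prefix each field (in the spirit of SSH's string/mpint
        # encodings) so adjacent fields can't shift into one another under a
        # different parse.
        return struct.pack(">I", len(value_bytes)) + value_bytes

    def exchange_hash(domain_tag, fields):
        h = hashlib.sha256()
        h.update(length_prefixed(domain_tag.encode()))  # e.g. "RFC4419.REQUEST"
        for f in fields:
            h.update(length_prefixed(f))
        return h.digest()

    # The old and new request variants now hash differently even when the
    # remaining serialized bytes happen to coincide:
    old = exchange_hash("RFC4419.REQUEST_OLD", [b"V_C", b"V_S", b"n-bytes"])
    new = exchange_hash("RFC4419.REQUEST",     [b"V_C", b"V_S", b"n-bytes"])
    assert old != new
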
16:12:43gmaxwell:https://bitcointalk.org/index.php?topic=918018.0 "Bi-directional micropayment channels with CHECKLOCKTIMEVERIFY"
16:12:59roconnor:gotta go. ciao.
16:16:02hearn_:hearn_ is now known as Guest56620
16:25:08NewLiberty:NewLiberty is now known as NewLiberty-afk
16:25:42catlasshrugged:catlasshrugged is now known as Guest70943
16:42:04NewLiberty-afk:NewLiberty-afk is now known as NewLiberty
16:43:46lclc_bnc:lclc_bnc is now known as lclc
17:35:04lclc:lclc is now known as lclc_bnc
17:54:37Emcy_:anyone know where/how gavin came up with the 20mb figure for new blocksize?
17:54:45Emcy_:arbitrary?
17:56:28Emcy_:from that post it seems like he spent a while showing that hardware a few years old can handle much bigger blocks, but we already knew that, really. The issue is bandwidth.
17:58:47Emcy_:the issue of bandwidth seems to have been left almost as an afterthought :/. I could tell you that 20mb blocks would preclude me running a node full time on the internet service I have right now today, let alone in the future
17:59:53gmaxwell:Emcy_: I don't think we knew it in a strong sense, but we did assume it and would have been surprised otherwise. Back in 2013 I had a conversation with Gavin and a number of others at Bitcoin 2013 and I expressed the view that I think that kind of testing is a hard prereq to even having a discussion about the wisdom of doing anything; it's simply too easy to do the test as an initial check to see where the wheels fall off. So, indeed, while it doesn't address the Important Issues, it's still a useful and interesting thing to do.
18:00:29Emcy_:sure, the tests have to be done
18:00:51Emcy_:it's a good thing to show definitively what we expected to be the case
18:01:04Emcy_:im just worried he is still too dismissive of the bandwidth issue
18:01:23Emcy_:or that he bases his conclusions around an assumption of google fiber or something
18:02:01Emcy_:lots of people have data caps as low as 200gb/m. Mine is actually less (and it depends on the time of day, which is also getting more common)
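
As a rough back-of-the-envelope figure (assuming consistently full 20 MB blocks and counting only a single download of each block, with no relay or upload):

    20 MB/block x 6 blocks/hour x 24 hours/day x 30 days ~= 86 GB/month

Uploading blocks to even a handful of peers, plus transaction relay, multiplies that several times over, so a 200 GB/month cap could plausibly be exhausted by block traffic alone.
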
18:12:47Emcy_:I AM FRETTING ABOUT IT
18:12:51Emcy_:ok im going to sleep
18:13:09gmaxwell:Probably of some interest here, the OpenSSL bug "Bignum squaring may produce incorrect results" (CVE-2014-3570) has been de-embargoed. This bug was discovered as part of the development of libsecp256k1. I've commented some about it on HN: https://news.ycombinator.com/item?id=8857398
18:16:55nsh:* nsh perks
18:18:42midnightmagic:gmaxwell, sipa: will you guys be re-adding the comparison testing back into libsecp256k1 now?
18:21:19gmaxwell:probably not, actually. We're still doing high level (full system) comparison testing, just not unit (basic operation) level. We don't really have so much 1:1 matching of the basic operations anymore in any case. E.g. we don't need a generic bignum implementation anymore.
18:33:26midnightmagic:gmaxwell: is the testing that was pulled out available anywhere or could it be of use to a third-party ec library?
18:36:25gmaxwell:it's in the git history. but it requires access to 'internals' so it's not easy to just use with other things.
18:37:22midnightmagic:ah, that's nice then. thank you, history is perfect.
18:41:12nsh:gmaxwell, what was the mistake in BN_sqr.c?
18:41:21nsh:having trouble finding the fix in openssl's commits
18:41:49nsh:(also trying to find out if libressl is affected)
18:42:05sipa:nsh: in crypto/bn/asm/x86_64-asm.c iirc
18:42:19nsh:oh, ah
18:42:20sipa:in a macro with asm.code
18:42:24gmaxwell:nsh: almost certainly.
18:42:32gmaxwell:sipa: IIRC the C code was wrong too. no?
18:42:50gmaxwell:(been a while, we threw this over to openssl months ago)
18:43:10sipa:yes
18:43:15gmaxwell:10:42 < sipa> the C code was #if 0'd out, but yes
18:43:17sipa:it was #if 0'd out
18:43:25gmaxwell:Right, relevant for libressl perhaps.
18:49:53gmaxwell:I'm really pretty proud of our testing in libsecp256k1; when redirected to OpenSSL in a blackbox-ish manner, it found a bug that had probability p=2^-128 for 'random' inputs. This was part of what I was referring to in the 0.10 release notes when I wrote "we have reason to believe that libsecp256k1 is better tested and more thoroughly reviewed than the implementation in OpenSSL".
18:50:47nsh:hmm
18:55:36midnightmagic:well it is pretty neat. congratulations on finding a fundamental problem.
18:56:24nsh:squaring a big number looks very difficult
18:56:44nsh:i wonder how much of that is an artifact of the x86 legacy and how much is just mathematics
18:57:03nsh:you'd think it'd be easy to formally prove the correctness of a limbed squaring function
18:57:42zooko:gmaxwell: nice work!
18:59:14nsh:but otoh i inhabit a wondrous fairy-tale land of theory and whimsy unsullied by having to make things, or worse, make them work
18:59:16faraka:would it make sense to implement a zkp to audit exchange transactions? to the same end of peter todds auditing method for exchanges?
18:59:43nsh:audit in what sense?
19:00:40faraka:let's say i have a merkle chain of n items, is it possible to create a zero knowledge proof of the existence of a correct chain between hash 1 to n?
19:03:00nsh:hmmm
19:03:36nsh:strangely this came up at congress
19:06:23faraka:link?
19:07:37nsh:in discussion, which unfortunately i don't remember much detail of, sorry
19:08:17nsh:but afaik, you can produce a ZKP of a route-to-node in an authenticated data structure under some or other model
19:08:45nsh:andytoshi or gmaxwell or petertodd would know infinitely more than me on the matter
19:09:48nsh:in the context of exchange settlements you just want to prove consistency, which is an easier problem in general
19:20:12ajweiss:did you guys happen upon a value that squared wrongly or was that found by auditing openssl?
19:20:43midnightmagic:ajweiss: https://news.ycombinator.com/item?id=8857683
19:22:07gmaxwell:ajweiss: it was a result of "greybox" testing, I suppose you could say.
19:22:47gmaxwell:Of course we've also audited OpenSSL, but there is only so deep that someone whose goal is something other than openssl itself is going to go into their optimized math code. :)
19:23:25catlasshrugged:@kristovatlas: Updated SharedCoin advisory: Blockchain has claimed to fixed the privacy issue (not yet confirmed). http://t.co/XN0XGCxuFv
19:26:34ajweiss:low transition probability?
19:28:56gmaxwell:ajweiss: numbers like 1111000000000000000001111111111111111111110000111100000000001111111
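
A toy Python sketch of that style of greybox/differential testing: generate operands built from long runs of identical bits (much more likely to exercise carry-propagation corner cases than uniformly random values) and compare an implementation under test against an independent reference. Here square_under_test is a hypothetical placeholder and Python's arbitrary-precision integers stand in as the reference; this is not the real libsecp256k1 test harness.

    import random

    def biased_operand(bits=256):
        # Build a number out of alternating runs of 0s and 1s of random lengths.
        value, pos, bit = 0, 0, random.getrandbits(1)
        while pos < bits:
            run = random.randint(1, 64)
            if bit:
                value |= ((1 << run) - 1) << pos
            pos += run
            bit ^= 1
        return value & ((1 << bits) - 1)

    def square_under_test(x):
        return x * x   # placeholder; a real test would call the optimized code

    for _ in range(10000):
        x = biased_operand()
        assert square_under_test(x) == x * x, f"squaring mismatch for {x:#x}"
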
19:42:06nsh:i wonder if it's possible/worthwhile to bitsquat bitcoin addresses
19:43:48nsh:the checksum seems to be concerned with glyph-substitutions rather than bitflips
19:44:52gmaxwell:nsh: I believe I previously created an issue for bitcoin core to post-verify signed transactions against the reencoded input precisely due to that concern.
19:45:10nsh:hmm
19:52:11gmaxwell:e.g. take your signed txn, and reencode the addresses out of it. Verify the addresses and values against the inputs as far back up the stack as you can.
19:54:39nsh:* nsh nods
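
To make both halves of that concrete, here is an illustrative Python sketch (not Bitcoin Core's code): base58check's 4-byte double-SHA256 checksum protects the address string itself, while the re-encode-and-compare step gmaxwell describes catches corruption of the decoded bytes that happens after decoding, which the checksum alone cannot see.

    import hashlib

    ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def b58check_decode(addr):
        num = 0
        for ch in addr:
            num = num * 58 + ALPHABET.index(ch)
        raw = num.to_bytes((num.bit_length() + 7) // 8, "big")
        # Leading '1' characters encode leading zero bytes.
        raw = b"\x00" * (len(addr) - len(addr.lstrip("1"))) + raw
        payload, checksum = raw[:-4], raw[-4:]
        if hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] != checksum:
            raise ValueError("bad checksum: the address string was corrupted")
        return payload  # version byte + hash160

    def b58check_encode(payload):
        raw = payload + hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
        num = int.from_bytes(raw, "big")
        out = ""
        while num:
            num, rem = divmod(num, 58)
            out = ALPHABET[rem] + out
        return "1" * (len(raw) - len(raw.lstrip(b"\x00"))) + out

    # The checksum only covers the string; re-encoding the payload that actually
    # went into the transaction and comparing it to what was originally entered
    # catches a bit flipped in memory in between.
    entered_payload = bytes([0x00]) + bytes.fromhex("89abcdefabbaabbaabbaabbaabbaabbaabbaabba")
    addr = b58check_encode(entered_payload)
    assert b58check_decode(addr) == entered_payload
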
19:55:40ajweiss:interesting... it's a technique used for efficient testing of digital circuits...
20:03:57tacotime:deanonymizing sharedcoin tx is kind of like shooting fish in a barrel
20:04:34catlasshrugged:tacotime: how recently did you look at it?
20:05:18tacotime:months ago, so maybe it's improved since then
20:05:35catlasshrugged:it has *changed* since then, I can't speak to whether it's improved
20:05:55tacotime:the problem with all centralized mixing services is that they couldn't care less whether proper mixing is occurring so long as it simply appears to be occurring to the end user
20:06:07tacotime:as long as people are using it, they get their 1-3% fee or whatever
20:06:16catlasshrugged:tru dat
20:21:22Dizzle__:Dizzle__ is now known as Dizzle
20:54:20DougieBot5000:faraka: WRT zero-knowledge merkle chains, in theory a zk-SNARK constructed with the rules for validation of your chain could be used to verify that there exists a valid chain satisfying those properties
20:55:07DougieBot5000:it may not be practical though, as I don't think zk-SNARKS are very efficient
20:55:16DougieBot5000:yet?
20:57:45gmaxwell:11:00 < faraka> let's say i have a merkle chain of n items, is it possible to create a zero knowledge proof of the existence of a correct chain between hash 1 to n?
20:57:50gmaxwell:what does "correct chain" mean?
20:58:14gmaxwell:If correct means "anything at all" then sure. Your proof is return true; :)
20:58:30DougieBot5000:I just took it to mean "satisfying some validation criterion"
20:59:46DougieBot5000:gmaxwell: aside from the obv implementation and practical issues with something like a zk-SNARK, is there any reason one could not be used to bootstrap clients for the initial chain download?
21:00:36DougieBot5000:either use a proof that X number of headers from the genesis are correct (the proof generator would need to download and verify them) or directly specify the UTXO set as an output
21:01:31DougieBot5000:in the first case, it might save some verification and lookups, but the client would still need to generate the UTXO set itself
21:01:53DougieBot5000:in the second case, it should be good to go (except for blocks newer than the proof generation time)
21:02:16DougieBot5000:am i missing something obvious?
21:04:33gmaxwell:The first case doesn't save much, but can be used to avoid some dos attacks. (e.g. wasting your time fetching a chain that isn't really best). We give a log-scaling snarkless ZKP for this in the sidechains whitepaper.
21:04:51gmaxwell:As far as the second, it's been suggested many times before; it's just infeasible currently.
21:05:36gmaxwell:State of the art ZKP performance (which has only 80 bit security and requires trusted setup) has the prover evaluate its code with speed ~= 10Hz.
21:05:58DougieBot5000:Do you get any speedups by removing the need for zero-knowledge from the SNARK? Most of the papers i find on SNARKS are the ZK variety
21:06:07DougieBot5000:yeah, the trusted setup is a big sticking point
21:06:37gmaxwell:No. ZK is almost a "for free" side-effect of the proof being sublinear in the size of the execution transcript.
21:06:42DougieBot5000:i imagine though that simply having someone generate a proof only once a month or longer would be sufficient and amortize the large proof generation cost somewhat
21:06:55DougieBot5000:well, amortize is the wrong word there
21:07:02DougieBot5000:i see
21:07:30DougieBot5000:hmm, at 10Hz though, even a fraction of the chain would take forever to validate
21:07:34gmaxwell:(to put the 10Hz into context, state of the art ecdsa verification takes 183k cycles on x86_64 and x86_64 cycles are more powerful than the proof system cycles)
21:07:53gmaxwell:(though there are better ways to perform that particular operation, it's stupidly slow in any case)
21:08:53gmaxwell:DougieBot5000: yes, we could afford _insane_ proof costs, since we only need to do one (or a few; due to trusted setup) proofs for the whole world. But insane has limits.
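
Putting those two numbers together gives a crude sense of where the limit sits (very rough, since a proof-system "cycle" is weaker than an x86_64 cycle, so this understates the cost, while the better in-proof approaches gmaxwell alludes to above would reduce it):

    183,000 cycles per ECDSA verification / ~10 cycles per second of proving
      ~= 18,300 seconds ~= 5 hours of prover time per naively verified signature

and the chain already contains tens of millions of signatures.
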
21:09:04DougieBot5000:i see. Perhaps when we have 20+ years of chain history and better SNARK implementations, it may be feasible to roll some chunk of that into a snark proof
21:09:44gmaxwell:DougieBot5000: Yes, I think it's likely. There is nothing fundamental preventing this from being acceptably fast.
21:10:15DougieBot5000:What are the verification times like for the 80 bit state-of-the-art you mentioned?
21:10:54DougieBot5000:I seem to remember it being either constant time, or some small polynomial related to circuit size or something?
21:11:33gmaxwell:on the order of 10ms. So the system which has state of the art prover performance/scaling is slightly slower to verify, because it must use an insanely constrained set of cryptographic parameters that make the verifier a bit slower.
21:12:09DougieBot5000:Thats not bad at all
21:12:28DougieBot5000:well, thanks for answering my questions gmaxwell, dont let me waste any more of your time
21:12:35DougieBot5000:a pleasure, as always
21:12:50gmaxwell:DougieBot5000: most of the things you've seen people write about are all based on the same underlying cryptosystem (GGPR'12), and have more or less the same benefits and weaknesses (super fast to verify, tractable to prove for small statements, trusted setup)
21:13:15DougieBot5000:any work on removing the trusted setup component?
21:13:58DougieBot5000:I try to keep up, but that Eli Ben-Sasson just keeps cranking out papers on it
21:14:48phantomcircuit:gmaxwell, every time i think i've come up with something novel i realize it's either already been designed or is only slightly different
21:14:48phantomcircuit:heh
21:15:04DougieBot5000:yeah, same here
21:15:27DougieBot5000:i remember coming up with a blockchain compression idea a year or two ago
21:15:43DougieBot5000:not only was it not new, it was worse than what everyone else had come up with years before that
21:22:21gmaxwell:Better than coming up with things that are so stupid no one has mentioned them at all.
21:25:58phantomcircuit:gmaxwell, :)
21:26:22zooko:Yeah. ☺ I know I'm on the right track when I'm inventing things that better thinkers have already invented, studied, and superseded.
21:27:28ajweiss:"you know, for kids!"
21:46:52Dizzle__:Dizzle__ is now known as Dizzle
23:07:09phantomcircuit:interesting observation, if a transaction has equal sized outputs coin selection picks the lowest index number
23:07:14phantomcircuit:possibly that should be randomized
23:08:08NewLiberty:NewLiberty is now known as NewLiberty-afk
23:09:12phantomcircuit:case in point https://blockchain.info/tx/14f2680565ba651d89247e59befeae4c9ef5f140bc589acf059655e6c3bd75ff
23:14:50gmaxwell:hm? it does?
23:16:07phantomcircuit:gmaxwell, appears to
23:16:08gmaxwell:if you would have asked I would have said I thought we randomly shuffled the inputs first.
23:16:11phantomcircuit:oh actually
23:16:16phantomcircuit:i wonder if im doing this to myself
23:16:25phantomcircuit:yes i am foot gunning
23:16:27phantomcircuit:nvm
23:29:06faraka:does anyone have a copy of the hop whitepaper by cunicula?
23:30:44gmaxwell:op_mul: Oh hey, I think I may know why that crazy nonce reuser reuses nonces. Maybe they use a single random nonce per transaction. Doing so would make the signing for the second and later inputs about 100x faster.
23:31:20gmaxwell:op_mul: so if they're a super slow HSM or something they might have decided this suicidal-sounding optimization was a good idea and done it intentionally.
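
For context on why that optimization is suicidal: two ECDSA signatures (r, s1) and (r, s2) over different message hashes z1 and z2 that share the same nonce k satisfy s1 = (z1 + r*d)/k and s2 = (z2 + r*d)/k (mod n), so anyone who sees both can solve for k and then for the private key d. A minimal Python sketch, with n the secp256k1 group order; it assumes the two s values are the ones actually produced with the shared nonce (ignoring low-s normalization), and the signature values would be hypothetical inputs.

    # Recover the private key from two ECDSA signatures that reused a nonce.
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def recover_private_key(r, s1, z1, s2, z2, n=N):
        k = (z1 - z2) * pow((s1 - s2) % n, -1, n) % n   # the shared nonce
        d = (s1 * k - z1) * pow(r % n, -1, n) % n       # the private key
        return d
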
23:47:31NewLiberty-afk:NewLiberty-afk is now known as NewLiberty