00:03:40maaku:rusty: is this prevsteps?
00:08:10rusty:maaku: yeah..
00:11:52maaku:huffman outperforms rfc6962?
00:17:45maaku:rusty: btw to make absolutely sure we're comparing apples to apples, we might want to pregenerate the lucky values for each block header
00:19:15smk_:smk_ is now known as smk
00:20:32maaku:if my reading of the code is right, it's possible for two methods to take different paths, which result in different block header histories
00:23:19belcher_:belcher_ is now known as belcher
00:28:11rusty:maaku: ok, stepping back. I tried adding a "cache", ie. some blocks duplicated in left branch, mmr tree in right branch.
00:28:27rusty:maaku: first attempt was simply the N "luckiest" blocks.
00:28:46rusty:maaku: then played with different topologies of that cache tree. The winner was a huffman tree.
00:29:28rusty:maaku: then I realized that almost always, the best path is v. similar to the previous best path, so instead of a cache, I just used the previous CSPV path back to genesis.
00:29:51rusty:maaku: ... using the same MMR topology for that "cache tree".
00:30:07rusty:maaku: Now, that *is* deterministic, and *is* incremental.
00:30:30maaku:right path to genesis is optimal for the cache, since you're using it for large skips back
00:31:31maaku:so you're storing path to genesis on the left branch, and mmr of all blocks on the right branch
00:33:24rusty:maaku: yeah, which means some blocks are stored twice, but probably not worth optimizing.
00:33:51maaku:yeah certainly not. space is cheap in a hash tree structure
00:33:58maaku:nice results. i will be afk for a few hours (catching sleep before my kids wake up, early am here)
00:34:06rusty:maaku: exactly. Sure... thanks!
00:37:14maaku:and besides, path to genesis isn't 'extra' data to keep around -- it's presumed available in many cases
00:37:54maaku:because you need to show path to genesis to demonstrate connectivity and aggregate work
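To make the structure concrete: the scheme under discussion commits each block to a small two-branch tree, with the previous block's back-path to genesis (the "cache") on the left and the MMR over all prior headers on the right. A minimal sketch in Python, assuming a simple SHA-256 pair combiner; the names (merkle_root, back_path_commitment) are illustrative, not rusty's actual spv.c code:

    import hashlib

    def h(*parts: bytes) -> bytes:
        # Illustrative interior-node combiner (a real design would add domain
        # separation and commit to full header serializations).
        return hashlib.sha256(b"".join(parts)).digest()

    def merkle_root(leaves):
        # Plain Merkle root over a non-empty list of 32-byte leaves.
        layer = list(leaves)
        while len(layer) > 1:
            if len(layer) % 2:
                layer.append(layer[-1])      # duplicate the odd leaf out
            layer = [h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
        return layer[0]

    def back_path_commitment(prev_best_path, mmr_peaks):
        # Left branch: the previous block's best path back to genesis (the cache,
        # so some headers appear on both sides).  Right branch: the MMR over all
        # prior headers, represented here by its peaks bagged into one root.
        return h(merkle_root(prev_best_path), merkle_root(mmr_peaks))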
02:06:34Guest94906:Guest94906 is now known as jaekwon
02:13:39smk:smk has left #bitcoin-wizards
03:23:31s1w:s1w is now known as Guest89840
03:29:42Guest89840:Guest89840 is now known as SomeoneWeird
04:11:51maaku:rusty: *optimal* path to genesis is not incremental
04:12:07maaku:although we could of course come up with an incremental rule that gives good paths
04:14:07maaku:because a new lucky block could overshoot one of the intermediate headers and it might be fewer hashes to take one of the intermediate headers instead
04:14:22maaku:*take one of the elided headers
04:24:02rusty:maaku: confused...
04:25:01maaku:rusty: okay imagine path to genesis is 10000 -> 100 -> 0
04:25:22rusty:maaku: yep.
04:25:30maaku:If I get a lucky block that skips all the way back to 1, the optimal path is X -> 1 -> 0
04:25:37rusty:maaku: yep.
04:26:08rusty:maaku: but for block 10001, we use optimal path for block 10000. We don't know the optimal path for this block until we've solved it, of course, when it's too late...
04:27:17maaku:but the problem is block 1 isn't in the cache
04:27:19rusty:maaku: turns out, the optimal path very rarely changes, so it's a good guess as to what we'll need.
04:27:26rusty:maaku: that's why it's in the mmr tree.
04:27:40rusty:maaku: which is why we need both.
04:27:50rusty:ie. left node is cache, right node is mmr tree.
04:28:02maaku:rusty: no, two issues : (1) the contents of the block might not be known to the validator
04:28:11maaku:(2) the entire mmr tree might not be known to the validator
04:28:27maaku:the validator might only know the peaks of the mmr, and the list of cached headers (path to genesis)
04:28:47rusty:maaku: this depends on the definition of "incremental", I guess.
04:29:23rusty:maaku: you can't validate this unless you know all the headers, it's true.
04:29:56maaku:ok by incremental I mean that it is updatable or validatable without knowing all the headers or block contents
04:30:36maaku:it is incremental in the above example if you store 10001 -> 100 -> 0
04:30:50rusty:maaku: so, is mmr not incremental?
04:31:06maaku:mmr is incremental, you just store the peaks
04:32:42maaku:but that's the point -- if you assume the validator only has the peaks handy, it can't pull out other headers to use in the CSPV proof cache
04:33:58rusty:maaku: only knowing the peaks for block N may be sufficient to check the hash in block N+1, but it's not enough to use it for a CSPV hash of course. At least, if you're trying to prove back to block M you have to know M...N.
04:35:02rusty:maaku: I'm assuming your point is that there's value in being able to verify?
04:39:41rusty:maaku: hmm, my original cache simply stored the N "luckiest" blocks, which is incrementally verifiable without knowing the rest of the blocks.
04:42:45maaku:that could work, or you can store the most recent block in a lucky range
04:43:32maaku:e.g. the most recent block header with 2^x <= luck < 2^(x+1)
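A rough sketch of that bucketing rule, assuming "luck" is already reduced to an integer (e.g. how far below target the header hash landed); the function names are made up for illustration:

    def luck_bucket(luck: int) -> int:
        # Bucket index x such that 2**x <= luck < 2**(x+1), for luck >= 1.
        return luck.bit_length() - 1

    def update_lucky_cache(cache: dict, height: int, luck: int) -> None:
        # Keep only the most recently seen header in each luck band, so recent
        # low-luck blocks stay represented instead of only ancient outliers.
        cache[luck_bucket(luck)] = height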
04:47:13rusty:maaku: the reason storing the old best path works so well is that new lucky blocks (statistically) *always* converge with it. ie. if it was 10000 9000 8000 7000 6000 5000 4000 3000 2000 1000 0, the 10001 block which could reach back just past 4000 will *always* reuse that 4000 3000 2000 1000 0 part of the path. It never forges a new 3500 2500 1500 500 0 path.
04:48:26rusty:maaku: and that's why this cache works so well, because 4000 is a short proof. My previous attempts to pick winners failed badly.
04:48:34maaku:rusty: right, it will be close to optimal. but not actually optimal
04:51:22rusty:maaku: well, my measurements put the incremental approach at 700 hashes, vs 430. That's pretty big.
04:52:46rusty:maaku: I think "luckier than the 32 I have" can be refined (eg. consider overlaps?), but not that much.
04:56:15rusty:maaku: hmm, we could insist it be better or equal to the previous path (plus 1). That makes it verifiable, but not calculable, and you really didn't want that. I'll have to think some more...
04:58:52rusty:* rusty needs more coffee...
05:10:17rusty:maaku: OK, what if we trim the cache when prev was lucky, otherwise add. No changes allowed. Off the top of my head, that seems sane.
05:10:52maaku:i'm not sure "the luckiest N" is really optimal
05:11:11rusty:maaku: I'm sure it's not!
05:11:20maaku:because after significant time those end up being very distant in the chain and therefore rarely used
05:12:10maaku:and e.g. taking the most recent lucky block of a specific interval of luckyness ensures that some recent low-luck blocks are produced
05:12:17maaku:*included
05:12:49rusty:maaku: yes, but you might miss a big win, and in the long run, the CSPV path is all about big wins.
05:14:28rusty:maaku: so I think we can build the (almost) optimal path cache incrementally. Figure out which of the cache we can reach with the previous block, throw away the rest. If we can't reach any, add the prev-1 block.
05:15:06rusty:(I don't know how close to optimal that will be, but I can find out)
05:21:13gmaxwell:maaku: the optimal path to genesis can be given by an incremental algorithm. That's the dynamic programming solution I gave before.
05:23:12gmaxwell:it needs O(n) space. and O(n) computation. Starting at genesis, you ask for each block which reachable block has the lowest cost to genesis? add one to the number and that's your cost to genesis, save that and the backpointer. Then move onto the next block.
05:23:24gmaxwell:It's guaranteed to give the optimal path.
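A sketch of that dynamic-programming pass, assuming reach[i] is the lowest height block i can link back to (reach[i] <= i-1, since every block can at least reach its parent); written naively the inner min() makes it quadratic in the worst case, which is the caveat noted below:

    def optimal_back_paths(reach):
        # cost[i]: fewest hops from block i back to genesis; back[i]: which
        # block to jump to first on that optimal path.
        n = len(reach)
        cost = [0] * n              # genesis costs nothing
        back = [None] * n
        for i in range(1, n):
            best = min(range(reach[i], i), key=lambda j: cost[j])
            cost[i] = cost[best] + 1
            back[i] = best
        return cost, back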
05:24:10maaku:gmaxwell: by optimal do you mean shortest? because we've already established by experiment that shortest is significantly sub-optimal
05:24:26maaku:i gave an example above where optimal involves pulling in other paths
05:24:32gmaxwell:it has the total fewest number of bytes.
05:28:18maaku:well as i showed above, that's not always the case. if the current best path is 10000 -> 100 -> 0, and 10001 is lucky enough to skip to block 1, that will absolutely result in a smaller proof with fewer bytes
05:28:28maaku:but, that's not incremental
05:29:09gmaxwell:maaku: sorry, we're obviously not communicating. If you're at 10001 and you can jump to 1 you do. Whats the problem?
05:29:26rusty:gmaxwell: here, "incremental" also means "doesn't know all N blocks"
05:29:41maaku:gmaxwell: i'm talking about a someone who doesn't retain the full block history
05:30:02maaku:*full header history
05:30:11rusty:gmaxwell: really, I think maaku is trying to sharpen my brain by creating new barriers for me to overcome :)
05:30:22gmaxwell:maaku: Ah, I wasn't aware of that assumption. As I said: optimial solution requirest O(N) storage.
05:30:34gmaxwell:er requires*
05:30:38gmaxwell:gah can't spell.
05:30:46maaku:e.g. mmr has the property that you only have to remember log(N) hashes to validate block N+1
05:31:25maaku:so strictly speaking, committing to the shortest path to genesis as well (which rusty shows adds a 50% improvement), requires dropping that property
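For reference, the peak-only MMR update looks roughly like this (a sketch, not a full proof-producing implementation): each append adds a height-0 peak and repeatedly merges equal-height peaks, which is why a validator only ever tracks ~log(N) hashes.

    import hashlib

    def node(left: bytes, right: bytes) -> bytes:
        return hashlib.sha256(left + right).digest()

    def mmr_append(peaks, leaf: bytes):
        # peaks is a list of (height, hash) pairs, tallest/oldest first.
        peaks = peaks + [(0, leaf)]
        while len(peaks) >= 2 and peaks[-1][0] == peaks[-2][0]:
            hgt, right = peaks.pop()    # newest peak (right sibling)
            _, left = peaks.pop()       # older peak of the same height
            peaks.append((hgt + 1, node(left, right)))
        return peaks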
05:31:52gmaxwell:(well in the worst case the optimal prover must have N storage or do quadratic computation, though on average you can get a little savings by forgetting dominated paths)
05:33:01gmaxwell:Why would you commit to it?
05:33:46gmaxwell:the optimal path at block X depends greatly on block X's hash. It'll change based on how far back X can reach.
05:34:45maaku:gmaxwell: he is Huffman encoding a hash tree to the headers on the path back to genesis
05:35:02maaku:so those can be reached more quickly than descending into the commit-to-all-blocks tree
05:36:50gmaxwell:okay, that sounds somewhat like the path I was on before you'd bludgeoned me into the commit to all blocks line of thinking.
05:37:14gmaxwell:In any case, the DP solution can be adapted to any finite amount of storage, though the result is no longer optimal.
05:39:11gmaxwell:e.g. you have a finite memory, and when you go to add a new block you forget the oldest, highest-cost-to-genesis block in your memory. Often that block will be dominated (e.g. there is a later block that has lower cost) and so it won't hurt the solution quality at all, but not always.
05:40:03rusty:gmaxwell: hmm, actually, I have an algo which seems to perform well with (probabilistically) log(N). Assuming I haven't completely messed up...
05:40:57gmaxwell:I'd expect that having any state at all will dramatically improve your solution quality.
05:41:11rusty:gmaxwell: if block N can reach a block in N-1's optimal path, do so. Otherwise, append N-1 to the path.
05:41:56rusty:maaku: that's also incremental, in your sense, if we're recording N-1's optimal path in the tree as a cache.
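rusty's incremental rule, as described just above, in sketch form (heights only, illustrative names):

    def next_best_path(parent_height, parent_path, reach_back):
        # parent_path: the parent's committed path, newest-first, ending at
        # genesis (height 0).  reach_back: the lowest height the new block's
        # hash lets it link to.  Jump to the furthest reachable block on the
        # parent's path and reuse the rest; otherwise prepend the parent.
        reachable = [hgt for hgt in parent_path if hgt >= reach_back]
        if reachable:
            furthest = min(reachable)            # furthest back = lowest height
            return parent_path[parent_path.index(furthest):]
        return [parent_height] + parent_path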
05:42:05maaku:gmaxwell: right well committing to just the path to genesis wouldn't work, but this hybrid approach seems to be performing under experiment
05:42:14maaku:rusty is checking more than just path to genesis iirc
05:42:38rusty:maaku: ... well, I *can*, but I wasn't. Let me do that now...
05:42:48maaku:ah that might change things
05:43:17maaku:my intuition is that to skip far back, you get on the path to genesis, then hop off and use the other side of the tree to get where you want to go
05:43:51gmaxwell:rusty: right but let's say N gets super lucky and jumps to 100 which jumps to 50 which jumps to 25 which jumps to 1. You're even luckier and can jump to 24 which jumps to 1. But that's not on N's path, so you jump to 25 instead of 24.
05:44:01maaku:so long as you have access to both, you don't get in the pathological bad cases that worried me about optimizing for path to genesis
05:46:08gmaxwell:rusty: one solution I did before was "when prepping my commitment, I figure out what my optimal path if I can go 1 back, 2 back 3 back.. etc. and I merge duplicates and only commit to the unique values" the results to genesis were good, but the result to other blocks were often very poor.
05:46:48rusty:gmaxwell: I have yet to find a case where that actually happens. I'll keep testing, because it *should*. But it's rare because you need two lucky blocks (24 and 25) close together, then a solution which can't use 24 (since my code will use the furthest block it can), *then* another which beats the first solution which can use it.
05:47:01Pan0ram1x:Pan0ram1x is now known as Guest20526
05:47:14rusty:gmaxwell: (yet to find == I ran 20 times and eyeballed the SPV lengths).
05:47:37maaku:rusty: one thing I wanted to do is modify the code to calculate the path to each block back, or random samplings thereof and do some curve fitting
05:48:03maaku:or maybe random to/from pairings until you get some reasonable statistical convergence
05:48:20rusty:maaku: sure... the code was originally copied from spv.c which is just a simple solver, and does that.
06:49:58luke-jr_:luke-jr_ is now known as Luke-Jr
06:53:45atgreen:jgarzik: moxie ldo/sto offsets are now 16 bits, resulting in much more compact code. Toolchain is updated. I'll send you the moxiebox patch in the AM.
07:02:00gmaxwell:\O/
07:02:45gmaxwell:atgreen: do you know if anyone has looked into LLVM support for moxie? It would be interesting to target moxie from rust.
09:53:39op_mul:"According to our research we should be able to store Private Keys in the DNA within 14-16 Months at the beginning 2016. This would make passports, credit-cards, driver license obsolete. All one than needs to do is to take a [swab] with a simple machine."
09:56:43gmaxwell:derp
09:56:57gmaxwell:talk about sidechannel leakage. Don't breathe too much.
09:57:08op_mul:would need more than a wizard for a machine to modify someone's DNA to store a passport. seems like a good deal though. invite someone over for beer, obtain DNA from glass, you end up with all their money and a copy of their passport for AML.
09:57:27adam3us:retarded, as are most biometric systems. these people rarely understand public key crypto
09:58:35gmaxwell:"My private key gave me cancer" "sorry, I got a virus that overwrote my keys"
09:58:55op_mul:even without touching-things sidechannel attacks, fingerprint based private keys would be low entropy *and* impossible to reproduce. fingerprint scanners are totally just fuzzy matching.
09:59:05wumpus:would be interesting, if they also used DNA to do the private key computation, need a special security storage for private keys in your DNA
09:59:49op_mul:wumpus: only way I see that being possible is if you genetically modify your children.
10:00:00adam3us:reader can copy, or interpolate enough to answer more queries from a few results; some biometrics are detachable (fingers!) and the liveness tests are pretty crappy. failing which there is always kidnap. you actually want a soft-failure on id theft - take card and pin, thats better than biometric false positive side-effect
10:00:22wumpus:op_mul: small price to pay, right
10:01:13gmaxwell:I'm sure it would be "sci-fi possible" for your body to create a computer-like device that sat under your finger tips, stored a private key securely internally, and could communicate via light with the outside world... like an embedded smartcard but made of meat. and such an upgrade could, presumably, be directed by modification of your DNA... but I assume this isn't what they're thinking
10:01:19gmaxwell:about. :P
10:01:20wumpus:op_mul: anyhow as any cell has a copy, storing data in their DNA is kind of wasteful (but very redundant?)
10:01:20adam3us:op_mul: its a common argument that they've somehow magically created a secure public key system composed of sampling some points as a result of a challenge. however i doubt its secure beyond a few samples, and you have to trust the reader not to do a full sample as during the setup phase.
10:02:12op_mul:wumpus: pets might work too. here's my parrot-wallet.
10:02:17adam3us:gmaxwell: even if its possible i think its actually undesirable. you want a soft-fail where you just give the man with the gun your card & pin. otherwise things escalate from there.
10:02:30wumpus:gmaxwell: I don't think so either, unfortunately :)
10:02:32gmaxwell:op_mul: they already put RFID transponders in pets. You could do that one _today_.
10:02:53gmaxwell:adam3us: but some guy chopping off your auth-finger makes for a much better movie plot!
10:03:25op_mul:adam3us: I'm thinking for fingerprints there's not enough variation. once you do some filtering on a fingerprint, they must all be pretty much the same. at least nowhere near 2**256 combinations.
10:03:47adam3us:the biometrics guys would say "liveness test" but all of their stuff is weak and spoofable and ultimately fails via kidnap / blackmail which are both less pleasant than soft-fail: hand over pin & card.
10:04:46adam3us:op_mul: but other than the "trust the reader" problem (and you shouldn't do that) - if there is an enrolment scan it can be repeated by a hostile reader - people leave fingerprints everywhere
10:05:14op_mul:a common comment I see about Bitcoin and sybil attacks is that we should do some sort of DNA based anti sybil. it invariably fails when it comes to proving the DNA just isn't /dev/random, and that you can't prove you own it because their solution to double spending was storing everybodies DNA in the block chain :P
10:05:41adam3us:its also identity based. who wants to put their identity everywhere. you cant MAC-tumble a fingerprint.
10:06:18op_mul:would suck if, as gmaxwell said, you based all your money on a cancerous blood cell.
10:06:23gmaxwell:At some datacenters (e.g. equinix is an example) they use some annoying hand shape biometric thing. Most techs that frequent these places have figured out that if, when you enroll, you put in only three fingers (like someone sawed off your ring and pinky), it'll enroll fine, and the reader never throws false negatives anymore, and even better: everyone's can match the hand print of anyone who's done
10:06:29gmaxwell:that, so you can share access cards.
10:06:58op_mul::<
10:07:48adam3us:yup the biometric folks not only fail misunderstanding of public key crypto concept, they fail the 5-year-old adversarial thinking test
10:07:58gmaxwell:"oh youre that guy with the 6 sigma handprint, I reconize you!"
10:08:07wumpus:hehe
10:13:46adam3us:hmm what the biometric guys are not doing - so if you trust the reader (dumb idea when its under the control of the attacker) - what you could do is fuzzy-hash it, to get a private key, sign a challenge response with ecdsa etc, and have the verifier store the ecdsa public key. that'd be immune to interpolation.
10:14:15gmaxwell:adam3us: there are at least academics doing that, though the fuzzy hash needs side information.
10:15:12adam3us:i am pretty sure none of the deployed systems are doing that. they're server-side and diy thinkers - so they'll have made a "signature scheme" made from sampling challenged subsets, which no doubt fails under a few samples to interpolation or multiple challenges (grind to find a challenge you can answer)
10:15:56adam3us:gmaxwell: side-info like steering with some guidance from the server that has the private key?
10:16:44gmaxwell:there is some data (I think it's public) that helps the fuzzy hash reliably find the same secret. Created as a side effect of pubkey generation.
10:17:50gmaxwell:I spent a while looking into fuzzy hashes for the idea of using them for "brainwallet" like usage, e.g. where you get asked to provide N passwords and they must be only somewhat accurate.
10:25:10op_mul:gmaxwell: given how little entropy brainwallets have.. you want to make them worse? sounds like joric's idea not yours.
10:27:16michagogo:12:06:23 At some datacenters (e.g. equinix is an example) they use some annoying hand shape biometric thing. <-- sounds like the fast passport control system at Ben Gurion airport
10:27:43michagogo:It's based on "the geometry of the back of the hand", iirc
10:28:33adam3us:op_mul: well it could be a net-win because you could ask the user more information that they'd likely start to forget the specifics of. so with the right params that could be a net win (fuzzy hash of more passwords or q/a type things)
10:28:54op_mul:a lot of those sort of things are probably gimped by the users not wanting false negatives and not wanting it to be treacle slow. combine that with a cheap micro, and you've got a consumer product.
10:29:12op_mul:stupid enough.
10:29:16op_mul:stupid english.
10:29:17michagogo:(Though I assume it'll be phased out within the next couple years, when biometric passports are permanently adopted)
10:30:16adam3us:gmaxwell: i did a simplified version of it in a design for guardianedge hard disk encryption product. it uses hash of canonicalized answers to q&a as a trustless backup mechanism.
10:31:24gmaxwell:op_mul: well I was exploring the idea. Not everything I think about is useful. I spent some time tonight developing a proof of the complexity class of problems solvable (ab)using a particular character in a superhero fiction as a computing device.
10:32:33op_mul:* op_mul nods
10:32:38adam3us:trustless backup ie you forget your password but you might still remember your question answers; and you dont want to trust a server to know the actual disk key and send it to you if you answer the questions, so the questions are stored locally and the key derived from the answers.
10:33:31adam3us:you can potentially limit guesses with server assistance, still without handing the server the automatic ability to derive your key. (key derivation with split keys and the server part with some rate limiting)
10:55:10execut3:execut3 is now known as shesek
11:41:17shesek:adam3us, you could also rate limit the guesses by just using key stretching and no 3rd party (for password recovery, even 2-3 weeks of stretch time would make sense)
11:42:56shesek:though, aiming for 2-3 weeks on consumer-grade hardware would probably be reduced to a few days for an attacker with a specialized hardware, but still
11:55:10Pasha:Pasha is now known as Cory
11:57:04gmaxwell:shesek: unfortunately users make mistakes. having to wait a week on hardening would very likely make the backup worthless.
11:58:28shesek:gmaxwell, if its only used for password recovery and meant for extreme cases, I don't think it would be that bad
11:59:32gmaxwell:shesek: there are lots of people who show up on the forums and irc with lost bitcoin-qt or armory wallets where they think they kinda have some idea. There is some guy on reddit who has a tidy business cracking wallets for people.
12:02:04shesek:well, giving those people an option to recover their funds by answering a bunch of questions they're very likely to remember the answer to, at the expense of having to wait a long time to find out if their answers are correct, would probably make the situation better
12:04:22gmaxwell:"Whats your favorite food" "oh crap, there are three different things I could have put there, and two of them have two alternative spellings I might have used." etc.
12:05:01gmaxwell:in any case it's a tricky tradeoff. users forget keys / can't figure out their trapdoor keys with remarkable frequency.
12:05:12gmaxwell:and funds lost to that are no less lost than funds lost to theft.
12:05:56adam3us:shesek: see outsourceable kdf https://bitcointalk.org/index.php?topic=311000.msg3341985#msg3341985
12:07:29adam3us:vitalik strikes again! (stumbled on while looking for above link) https://blog.ethereum.org/2014/10/23/information-theoretic-account-secure-brainwallets/ no citation and i'm pretty sure it was me who told him about that idea and his alternative constructions are inferior. what is it thats so hard about citations!
12:10:17adam3us:(on the key stretching). yes the system i designed used normal key stretching pbkdf2 if i recall on top of the hashed canonicalized answers.
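The shape of that scheme (a sketch of what is described here, not the product's actual code) is easy to illustrate: canonicalize the answers so trivial differences don't change the key, hash them together, and run a standard KDF over the result.

    import hashlib
    import unicodedata

    def canonicalize(answer: str) -> bytes:
        # Illustrative normalization: strip case, accents and extra whitespace.
        text = unicodedata.normalize("NFKD", answer).casefold()
        return " ".join(text.split()).encode("utf-8")

    def key_from_answers(answers, salt: bytes, iterations: int = 1_000_000) -> bytes:
        # PBKDF2 over the hash of the canonicalized answers, as a last-resort
        # recovery key.
        material = hashlib.sha256(b"\x00".join(canonicalize(a) for a in answers)).digest()
        return hashlib.pbkdf2_hmac("sha256", material, salt, iterations, dklen=32)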
12:12:52Pasha:Pasha is now known as Cory
12:15:11midnightmagic:either it's not on purpose and he has some kind of brain malfunction, or it's on purpose and he's a bullshit artist. either way why does anyone still talk to him?
12:17:30adam3us:midnightmagic: this was a while ago (oct 2013). yeah i kind of learnt my lesson. dont review stuff (or they try to attribute you as an advisor to add credibility), dont feed them ideas or they borrow them (with or without attribution, you lose either way) to polish their alt-coin's reason-for-existence story, dont tell them why their alt-coin is perceived as a scam, they'll tweak the story to make it less obvious
12:18:10adam3us:midnightmagic: dont critique why their system is broken and cant work, they'll tweak it so its less obviously broken, but still broken until you and other reviewers run out of energy (gmaxwell observation)
12:19:06shesek:adam3us, it is an effective way to get security reviews for free, though :-)
12:19:15adam3us:midnightmagic: gmaxwell had a phrase to capture that last effect.. kind of forgot the phrase, something like security by reviewer exhaustion
12:19:22midnightmagic:adam3us: for what it's worth, there's at least one person who notices the regular attempts to erase your name from pages that credit you properly. :( i've wanted to mention that for a while.
12:19:59midnightmagic:we play by different rules. it's a common argumentative tactic, I encounter it almost daily. I encountered it yesterday.
12:20:22midnightmagic:"we" being every human, not to attempt to draw tribal lines
12:20:59shesek:adam3us, that outsourceable kdf schema is pretty interesting. the big difference between the kind of hardware users and attackers would have has always seemed like a big problem for me
12:21:35adam3us:midnightmagic: maybe i did it to myself by calling bullshit on the alt-coin pyramid scam thing. then they like the idea but they dont want to credit me as then i'll be more likely to jump in and attack their scheme. cant have it both ways i guess (when the comment is alt-coin associated) either you get credited and they're doing it to pump their alts credibility or you dont. but vitaliks article other than being on the ethereum blog wasnt
12:22:01shesek:letting 3rd parties operate hardware for that and compete on prices would probably make it very affordable for users, and make it possible to use much more rounds and slow down attackers
12:22:04midnightmagic:adam3us: I've seen it elsewhere too. The bitcoin wiki; wikipedia..
12:22:48shesek:I wouldn't mind paying even a few hundred dollars per attempt, knowing that an attacker would have to pay that too
12:23:09adam3us:shesek: yes that way you can get a massive kdf with very efficient hardware and give litecoin gpu miners something useful to do with their gpus now the litecoin asics are out (otherwise they'll chase primecoin or x11/x13)
12:23:54shesek:and given that the profit margin would probably get close to zero due to competition, an attacker with his own hardware wouldn't be able to cut his costs much
12:24:28adam3us:shesek: you scale the kdf cost according to the value protected. if it was $1mil maybe $100 per guess would be appropriate. then as long as your password has > log2($1mil/$100) bits of entropy in it you're uneconomical to attack.
12:24:40adam3us:shesek: yes!
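The threshold being used here is just the break-even point where expected attack cost exceeds the value protected; a back-of-the-envelope check (ignoring that an attacker only has to search about half the space on average, which shifts the bound by roughly one bit):

    import math

    def min_entropy_bits(value_protected: float, cost_per_guess: float) -> float:
        # Brute force becomes uneconomical once the password has more than
        # log2(value / cost-per-guess) bits of entropy.
        return math.log2(value_protected / cost_per_guess)

    print(min_entropy_bits(1_000_000, 100))   # ~13.3 bits for the $1M / $100 example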
12:25:19adam3us:midnightmagic: yeah there was some stuff on wiki i had a slight attempt to fix some of it but the pages are locked and i quickly gave up to argumentative wikipedia editors who werent interested to fix.
12:26:25shesek:adam3us, ah, that's an interesting observation! you could indeed make it so its entirely unprofitable to even attempt brute forcing it
12:27:17shesek:we could even have wallet software adjust the scrypt parameters according to the password strength and amount of funds stored
12:27:32adam3us:shesek: exactly. thats the idea, choose the kdf difficulty according to that. you can also increase the kdf cost over time (by deleting some info) and keep different amounts in different wallets (need different passwords) for spending money vs savings or something.
12:28:08adam3us:shesek: though its not scrypt, its a variant of the rivest-shamir-wagner RSA based time-lock puzzle
12:29:19adam3us:shesek: if someone was feeling energetic they could have a go at launching that as a mining/distributed computing thing as a way to earn money with GPUs. you dont have to encourage brain wallets, you could use it on backedup protected encrypted private keys as a last line of defense if your device is stolen or remotely compromised.
12:29:39adam3us:shesek: tho if people are going to use brain-wallets tis is safer than the alternatives
12:32:27shesek:adam3us, maybe I can suggest that to the zennet guys, I bump into them from time to time (they're located here too, in Israel)
12:32:51samson2:samson2 is now known as samson_
12:32:58shesek:though, I'm not entirely sure that zennet isn't yet another vaporware (don't know enough about it/them to know for sure, but I somewhat get that feeling...)
12:39:47adam3us:its a fun project for anyone interested really. could be done as a coordinated service with a small fee more simply than a p2p protocol, with a master-slave server acting as a pool as a first version.
12:40:16gmaxwell:might be the sort of thing that would be fun to unify with the group vanitygen stuff I've talked about before.
12:40:57gmaxwell:e.g. take the highest paying work: kdf, vanitygen, etc.
12:41:07shesek:right. its much smaller in scope than what Zennet is aiming for
12:41:30shesek:but the hard part is finding people who are interested enough to pursue it, who else have the technical knowledge to implement it
12:41:33gmaxwell:yea general computation stuff just has so many crazy problems.
12:41:36shesek:s/else/also
12:42:04adam3us:gmaxwell: i think this could be a useful thing to do because it'll detract from the alt-coin pyramid effect. people have gpus and they want and enjoy doing something with them.
12:42:24gmaxwell:see also the illfated cpushare stuff.
12:43:17gmaxwell:I'd long hoped gpu mining bitcoin would contribute to more general compute for cash stuff, since bitcoin was providing base load that justified gpu farm investment; but it seemed the interest wasn't there.
12:46:18midnightmagic:maintaining gpu farms is exceedingly time-consuming. IMO the larger the farm, the more individuals doing it as a hobby end up eating all their time filling out RMA forms
12:46:45gmaxwell:yea, indeed hm. I think I've _finally_ stopped startling awake thinking I'm hearing a failing gpu fan.
12:47:06midnightmagic:that's why I stopped. and the reliable cards with ecc, or built for titan (for example) were so expensive there's no point in running them.
12:48:15gmaxwell:speaking of mining, there is a not widely circulated special group buy for spondoolies SP20 http://www.spondoolies-tech.com/products/roadstresss-sp20-special-holiday-gb two for $1000 instead of the normal $659/ea.
12:58:17atgreen:gmaxwell: I have a partial LLVM port already, and just because of Rust. Rust only became useful for systems level embedded stuff recently.
12:58:58gmaxwell:atgreen: awesome! and yes, rusts progress on really bare systems is part of why I asked instead of just thinking about it privately.
12:59:41atgreen:I used to work with Graydon Hoare (rust inventor) many years ago. He's an awesome guy.
13:00:48atgreen:so rust has been on my radar before it was rust
13:01:21adam3us:atgreen: me too, he was an intern at ZKS working in the security group.
13:01:48gmaxwell:(Well I worked for Mozilla in the research group; so I was sort of flooded by rust, and am increasingly obligated to use it because they've more or less implemented every single thing I trolled them about.)
13:03:00adam3us:atgreen: that was around 2000. at the time he was using up his free time coding an object oriented graphical OS. unfortunately now he's at stellar of all places. https://www.stellar.org/about/
13:05:05gmaxwell:(Including the semantics for integer overflow that I wanted: http://discuss.rust-lang.org/t/a-tale-of-twos-complement/1062 )
13:05:28atgreen:adam3us: then he joined my group at Cygnus/Red Hat and worked on embedded tools
13:05:40atgreen:adam3us: are you in Toronto?
13:09:37gmaxwell:atgreen: one of the things I was thinking about wrt moxie is if there are arch affordances which make the frequent bounds testing less costly. (or at least ones that are less complex than per object MMU like protection)
13:10:35gmaxwell:(related http://blog.regehr.org/archives/1154)
13:13:59atgreen:I have lots of opcode space for trapping math instructions. But you'd probably have to hand-code their use with __builtin functions.
13:15:12atgreen:the hardware implementation is relatively easy. divide already traps, and I guess I just trap when the carry bit is set.
13:16:20roidster:roidster is now known as Guest6174
13:17:04atgreen:err, overflow bit
13:17:09gmaxwell:well in a language like rust they could just be used (or at least when the proposal is implemented; this was related to aforementioned trolling. Standard rust integer types will be permitted to trap on overflow at runtime, but not required to. On x86 they'd only trap in debug builds.)
13:17:10atgreen:hmm
13:18:26gmaxwell:really hardware bounds checking probably has more practical impact, but it's not a trivial addition.
13:22:39adam3us:atgreen: no i am in malta these days, i've moved around a bit. i relocated to montreal to work for zks and graydon relocated to montreal for the internship also.
13:26:41atgreen:oh, right - they were in mtl
13:38:35atgreen:jgarzik: https://github.com/jgarzik/moxiebox/pull/11
13:39:22atgreen:you'll need to rebuild all of the tools. This was the last pending ISA breaking change I can think of.
13:47:53jgarzik:atgreen, ok!
13:48:13jgarzik:* jgarzik needs to update the d/l script to do cvs-up/svn-up/git-pull in case of existing dirs.
13:51:55hearn:jgarzik: do you have any experience debugging dns, perchance?
13:55:40jgarzik:hearn, A bit. Depends on which area. I wrote my own DNS server: https://github.com/jgarzik/dvdns Had to debug that.
13:56:06hearn:i'm doing the same. for some reason my server works fine when querying it directly. when doing a regular recursive lookup, my isp resolver gives back SERVFAIL
13:56:21hearn:i suspect a general dns configuration error rather than a bug in the server
13:56:29jgarzik:Was that really 9 years ago? Shit.
13:56:56hearn:of course because it's an error along the recursive path, i can't see any debug logs :(
13:57:04jgarzik:hearn, I assume you are setting the recursive bit
13:57:17hearn:you mean in the response? yes RD bit is copied across
13:57:54jgarzik:hearn, in query?
13:58:12hearn:i'm using dig, so yes. when i use +trace (non recursive) it works
13:58:31jgarzik:hearn, Your server copies the recursive bit into its upstream query?
13:58:56hearn:yes
13:59:05atgreen:jgarzik: I have an update script in the moxie-cores repo you can take
13:59:19jgarzik:OK, good. Anyway, I must pause and run an errand for the wifey. Back in 30 min.
13:59:42atgreen:https://github.com/atgreen/moxie-cores/blob/master/tools/update-tools-sources.sh
15:07:17Pasha:Pasha is now known as Cory
15:49:31naturalog:hi
15:49:49naturalog:shesek: got your email, replied
15:50:12naturalog:shesek: join #zennet
16:25:39shesek:naturalog, yep, got it :)
16:36:14Adlai`:Adlai` is now known as adlai
17:32:38lclc:lclc is now known as lclc_bnc
18:41:53lclc_bnc:lclc_bnc is now known as lclc
19:03:12lclc:lclc is now known as lclc_bnc
19:56:57op_mul:gmaxwell: I'd buy one of those spondoolies boxes for $500 but. that difficulty increase though.
20:12:00op_mul:gmaxwell: also with the comment about general computing, I've thought of setting up "pools" for various CPU heavy tasks before (that I could even have "shares" for), the sticking point is that I know some asshole is going to point a botnet at it, and then people come knocking on my door expecting me to be responsible. then you have to ask for ID or something, and then nobody can contribute.
20:25:49Luke-Jr:op_mul: what difficulty increase? :D
20:27:15op_mul:Luke-Jr: looks like this period will be a go-upper, probably. http://bitcoin.sipa.be/speed-small-lin.png
20:27:56Luke-Jr:dunno
20:28:11Luke-Jr:not by much if so?
20:31:58op_mul:probably 5% this period. I don't think it would be possible for *me* to break even with my power costs, even though they've got a lot lower recently.
21:43:04MRL-Relay:[surae] so, i just finished reading the shadowcash whitepaper, and I'm very confused because it appears to me as if they don't implement any NIZK *anything* and they're just throwing around terminology they don't understand. Am I missing something, or are *they* missing something?
21:46:41op_mul:surae: wouldn't be the first time somebody launched an altcoin with none of the features it claims to have.
21:46:51MRL-Relay:[surae] for sure
21:47:07MRL-Relay:[surae] just wondering if anyone was familiar with it
21:57:29lclc_bnc:lclc_bnc is now known as lclc
22:06:26phantomcircuit:gmaxwell, is there a significant performance penalty for exception handling on overflow?
22:33:00adam3us:op_mul: i think its required that the computing task operates over tor and has bandwidth constraints or per MB charges imposed
22:35:41op_mul:adam3us: I'm not sure how you'd police bandwidth limits. even forgetting a HS, botnet owners have unlimited bandwidth and unlimited IP addresses (unlimited in this context anyway)
22:36:31op_mul:adam3us: and the work I was thinking of was literally just "scan this range for matches, return partial matches as PoW"
22:43:44phantomcircuit:op_mul, suspect the dip and rebound are just noise
22:44:05phantomcircuit:or maybe someones putting hw online but like
22:44:06phantomcircuit:why
22:44:35op_mul:phantomcircuit: only reason I thought it was real is that the timing matches up with Bitmain having stock of their new chips.
22:45:30phantomcircuit:i guss
22:45:39phantomcircuit:it's kind of funny
22:45:51phantomcircuit:when the difficulty dropped all the calculators started showing infinity profit
22:46:48op_mul:I mean, there's nobody else. pretty much everybody is claiming early or mid 2015 for their next chips. asicminer, KNC, spondoolies, bitfury. there aren't really many other big players at this point, I don't think.
22:49:00phantomcircuit:everybody is targeting 16nm
22:49:11phantomcircuit:28nm HPC is a ~30% improvement
22:49:20op_mul:be interesting to see if anybody hits the mark.
22:49:26phantomcircuit:except it seems like people actually using it are getting more than that
22:50:35op_mul:yeah. I'd be surprised if anybody hits 0.05W/GHs at any sane production price point
22:51:05phantomcircuit:op_mul, i get a nice laugh at people quoting 0.05J/Gh
22:51:17phantomcircuit:i mean you can do that... for like 10x the capital costs
22:52:05phantomcircuit:wow what the S5 is chained?
22:52:11op_mul:yeah.
22:52:42op_mul:it means they're making bank on the S5. no DC-DC.
22:53:23phantomcircuit:$0.35/Gh @ 0.5W/Gh?
22:53:32phantomcircuit:yeah they're making bank on that
22:54:18op_mul:could be incendiary or genius depending if they got it right
22:54:59cookiemonster:is anyone going to the miami bitcoin hackathon?
22:55:14cookiemonster:cookiemonster is now known as Guest71633
22:55:17phantomcircuit:even then though that's like
22:55:32phantomcircuit:minimum 120 days @ $0.05/kWh
22:55:37op_mul:phantomcircuit: thing is, it's a parallel series design. depending how they did it, you could lose a whole board just due to one bad chip.
22:56:20phantomcircuit:op_mul, really?
22:56:25phantomcircuit:why would you do that
22:56:29op_mul:https://i.imgur.com/lZmXzh2.jpg
22:57:08op_mul:you'd do that to save on level shifting I suppose
22:59:09phantomcircuit:op_mul, yeah i guess
23:00:06op_mul:they've cut out most of their costs, so going even further probably makes sense. there's almost no cost in these boards. low current so you can have almost no copper on the board, no expensive 0.6v supplies
23:01:51op_mul:don't know why they chose the stupid beaglebone to control them though
23:16:40phantomcircuit:op_mul, i'd guess they bought them for nothing from cointerra
23:17:25op_mul:phantomcircuit: ha, probably.
23:18:32Pasha:Pasha is now known as Cory
23:20:50gmaxwell:I'd rather have beaglebone to an rpi any day.
23:21:29phantomcircuit:op_mul, any idea if these have digital vcc control?
23:22:19op_mul:gmaxwell: for a miner though? it's like using a gold axe to weed your garden. you don't even need a web UI, an app that talks to a simple API would be so much easier.
23:22:34op_mul:phantomcircuit: I've no detail outside of squinting at their photos.
23:23:02phantomcircuit:huh this kind of looks like they have the 12v from the psu directly connected to the chips
23:23:06phantomcircuit:that cant be right
23:23:44op_mul:phantomcircuit: there's a small DCDC on the back I think, you can see the coil on the upper right of the board. 12 > 9v.
23:24:31op_mul:oh ha, the picture of the boards are actually huge. https://i.imgur.com/InVQWW8.jpg
23:24:31lclc:lclc is now known as lclc_bnc
23:30:51op_mul:phantomcircuit: they mention that you can run it straight from 9v. so I suppose in that mode they just leave the buck converter on permanently so it's just effectively a short through the inductor.
23:32:15phantomcircuit:op_mul, yeah but where the hell are you going to get a 9v AC:DC
23:33:01op_mul:phantomcircuit: 4 * 9 in series gets you 36v.
23:34:55op_mul:float three 12v server power supplies and you'd be smiling
23:35:25phantomcircuit:op_mul, now that is a run away failure asking to happen :P
23:38:33op_mul:I don't see how else they expect people to get 9v power supplies.
23:40:52phantomcircuit:op_mul, i dont understand why they didn't just put 33% more chips on there
23:41:31op_mul:phantomcircuit: the regulation on PC power supplies is terrible. I think they're trying to get around that.
23:41:58phantomcircuit:op_mul, hmm maybe
23:42:11phantomcircuit:they could still have the buck there though
23:42:17phantomcircuit:would be a bit cheaper
23:44:04op_mul:I have ones that dip down to 11v or so when you're drawing a lot of power. might have been enough to make the strings unstable? there's room on the top row for one more voltage step, so there's got to be a reason other than space on the board.
23:44:26phantomcircuit:op_mul, also you can get server supplies that are well regulated for ~ the same price (but maybe 5% of them will experience infant mortality)
23:45:19op_mul:xbox 360 power supplies are cheaper again and have better regulation.
23:46:30op_mul:actually maybe not, there's a lot of server power supplies for peanuts on ebay
23:47:27op_mul:guess I'm thinking small scale and not building farms of the things :)
23:49:35phantomcircuit:op_mul, you can buy server psus used on the cheap in bulk
23:49:49phantomcircuit:but you have to build adapter boards to pci-e atx
23:50:13phantomcircuit:(not to mention to clear pmbus errors which can cause the supply to stop, even when you're never going to fix a broken one)