03:57:58op_mul:midnightmagic: for whatever it's worth, I do run Conformal's crap (non public exposed) locally to compare it with bitcoind. I'm riding on the limits of my memory, but I am running a few versions of bitcoin core and btcd.
03:59:51op_mul:I want to include coinbases insane Toshi full node as well, but I've not the disk space to be able to do comparisons with it just yet.
04:02:36op_mul:I find it *really* scary that they are pushing people to use their full node when it's quite obvious they are not open about their consensus bug/s.
04:03:01lechuga_:re: conformal?
04:03:06op_mul:yes.
04:08:17justanotheruser:Are people surprised that something written in Go uses more memory?
04:09:02justanotheruser:op_mul: it seems they are aware of their consensus bugs, they just want all miners to softfork so they only accept blocks that are valid by all full nodes that are popular
04:11:19op_mul:heh, well. I wonder what triggered that post.
04:13:20kanzure:a thread on bitcointak.org
04:13:24kanzure:*bitcointalk.org
04:13:29kanzure:and conversation in here from the other day
04:14:02kanzure:justanotheruser: their "probabilistically calculate consensus based on which clients fail or don't fail" is just a good way to converge on a set of rules that don't work at all
04:15:40op_mul:kanzure: yep, found it just after I said that. by "detect", do they mean they think miners should be running their own copies of heaps of nodes and comparing submissions?
04:16:44kanzure:i interpreted his scheme as "run multiple nodes, collect their output, then pick the output that the most nodes gave you as the one to relay"
04:16:52kanzure:i'm gonna go cry in a corner now
04:18:20op_mul:it means running hacked up bitcoin nodes to do litmus tests on, otherwise you'd end up with a BIP50 situation where all the *parts* of a block are valid but together it fails on *some* systems.
04:19:19op_mul:"Additionally, there are a myriad of techniques to reduce the burden such as the use of distributed super nodes of various blessed implementations which allow miners to check against without having to incur a lot of extra resource hits themselves."
04:19:41op_mul:ah yes, centralised nodes which can control what you put in a block. that sounds smart.
04:20:25kanzure:the one about "probability" was something like "pick your own set of node versions to run, then whichever result you get the most is the one you should relay to the network", but the problem is that the majority of your nodes can be broken
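The majority-vote scheme kanzure paraphrases can be sketched in a few lines; the validator functions here are hypothetical stand-ins, and the block limit values are made up purely to illustrate the failure mode he names (a broken majority outvoting the correct node):

```python
from collections import Counter

def ensemble_verdict(validators, block):
    """Run a block through every node implementation and return the most
    common verdict -- the majority-vote scheme described above."""
    verdicts = [validate(block) for validate in validators]
    verdict, _count = Counter(verdicts).most_common(1)[0]
    return verdict

# hypothetical validators: one follows the consensus limit, and two
# share the same off-spec bug (illustrative numbers only)
consensus_node = lambda block: block["size"] <= 1_000_000
buggy_node = lambda block: block["size"] <= 900_000

block = {"size": 950_000}
# the block is consensus-valid, but the buggy majority outvotes the
# one correct node, so the ensemble relays the wrong verdict
print(ensemble_verdict([consensus_node, buggy_node, buggy_node], block))  # False
```

Nothing in the vote weighs correctness, only popularity within your chosen ensemble, which is exactly the objection raised below.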
04:21:28op_mul:if that happens you obviously just weren't running enough of them
04:22:15kanzure:i dunno how much sarcasm you're intending
04:22:27lechuga_:while i dont agree with the alt solution portion i dont think the rest of the post was completely off-base
04:22:32op_mul:thick like butter. it obviously doesn't scale.
04:23:13lechuga_:and it's unfortunate but i think that sentiment is lost due to fixation on the porposed solution
04:23:19lechuga_:proposed*
04:23:23kanzure:the sentiment is wrong and broken
04:24:39kanzure:"the red herring is only focusing on forking risk introduced by alternative implementations [and therefore we should introduce as many potentially broken consensus implementations as possible]"
04:24:43kanzure:come on
04:25:30kanzure:i can only simultaneously maintain so many broken implementations at once
04:25:31op_mul:I think that sort of thought devolves into "well I don't want my blocks to be invalid, I won't include any transactions to have the best chance"
04:25:42justanotheruser:I still don't understand the opposition to a single consensus implementation
04:25:57kanzure:i believe the opposition boils down to "you're a control freak"
04:26:05justanotheruser:the argument always is "it's centralized"
04:26:14justanotheruser:yeah
04:26:24lechuga_:but the same consensus implementation has been broken wrt a previous version of the same
04:26:30kanzure:so?
04:26:39justanotheruser:I don't see the problem with the centralization though. We have a centralized implementation of the genesis block, a centralized implementation of sha256
04:26:59justanotheruser:I guess sha256 doesn't count since it's part of the consensus
04:27:01lechuga_:there are a myriad of sha256 implementations
04:27:16justanotheruser:lechuga_: there are a myriad of genesis block implementations
04:27:21lechuga_:also where does consensus affecting code begin/end
04:27:29lechuga_:is the line really that clear
04:27:52justanotheruser:clearly not with the 0.7 - 0.8 fork
04:28:27kanzure:lechuga_: just because consensus is difficult does not mean that creating new implementation attempts will fix the problem. if anything, it will increase the testing problem combinatorially.
04:28:29lechuga_:i think theres some truth to what davec wrote and it's handwavy to just mock him
04:28:38kanzure:your argument so far has been lacking, though
04:28:41nubbins`:fwiw pre-0.7 downloads the blockchain just fine
04:28:52kanzure:i am not mocking him, i have presented exact arguments
04:29:37justanotheruser:lechuga_: do you agree with him that miners should have consensus decided by the intersection of multiple consensus rulesets?
04:29:41lechuga_:no
04:29:50lechuga_:i told him to his face i didnt agree with that portion
04:29:51justanotheruser:what do you agree with him on then?
04:29:59lechuga_:well to his irc face
04:30:03justanotheruser:lol
04:30:25lechuga_:i agree with the sentiment i interpreted from the rest of his post
04:30:40lechuga_:which was basically "right now the solution to consensus breakage is to be really careful and hope"
04:30:54kanzure:no that's not the sentiment
04:31:01lechuga_:how did u distill it
04:31:03gmaxwell:lechuga_: on multiple sha256 ... not really, most are just copies of the reference code-- shimmed into different apis, or outright mechanical reproductions, there is basically no degree of freedom in implementing it (beyond a decision to unroll the inner loop or not)... and it's relatively easy to be confident you have it right. And often going out and writing your own, without good cause is
04:31:09gmaxwell:cause for concern, like with other cryptographic tools. (Not really arguing with you overall, just being a bit pedantic about sha256)
04:31:20kanzure:"You can certainly debate which one has more or less forking risk, but the underlying fact that both carry forking risk is indisputable."
04:31:48kanzure:he even specifically makes that text blue (i can't even imagine why he would choose to use font coloring, sigh)
04:32:25kanzure:anyway, just because there's "forking risk" does not mean that you should create as many possibly forking implementations as possible -_-
04:32:39justanotheruser:what fallacy is that?
04:32:47gmaxwell:lechuga_: Right now the general direction is extensive testing, on top of avoiding changes with consensus risk, on top of trying to keep the operative part of the network running consistent software. It is quite a bit more than hope, and that's basically an insulting way to put it; though I know no insult was intended.
04:32:52lechuga_:gmaxwell: nod all true and point taken
04:32:54kanzure:justanotheruser: i lost my fallacy decoder ring, sorry
04:33:08justanotheruser::(
04:33:21lechuga_:yes that wasnt a fair way to put it on my part and not really my intent
04:34:30gmaxwell:lechuga_: I think a reasonable long term goal is to get the consensus implementation into a state where all systems on the network can run a consensus algorithm which is formally provable to be identical.
04:35:25justanotheruser:I think he does have some truth when he says miners should have multiple consensus implementations validating their block actually. We shouldn't be doubling the number of consensus implementations, but some mining pools may want to test the block on a bunch of different architectures.
04:35:55lechuga_:i think it's relevant to more than just miners
04:36:24gmaxwell:I don't agree that miners should be doing that, generally. (other than perhaps monitoring)
04:36:29lechuga_:it isn't crazy to run a few lagged versions and if u detect differing results start paging people
04:36:34justanotheruser:well I can be confident that if a block is valid on x86_64, it will be in the mainchain
04:37:14justanotheruser:gmaxwell: why not? Unnecessary?
04:37:27gmaxwell:Because what do you do when you find a rejection? soft fork? that puts users who aren't running an ensemble validation at risk. (because the network surprisingly doesn't follow the path they thought it would as the subset of ensembling miners with a particular version in their set diverge the state)
04:37:53kanzure:yes, and your ensemble composition can cause long-term degradation if you keep making decisions like that
04:37:59kanzure:because popularity in your ensemble doesn't mean anything in particular
04:38:51justanotheruser:You would have to softfork I guess. It is safe to guess that a block valid on SPARC and invalid on x86 isn't going to be a profitable block to mine on
04:39:00gmaxwell:And if it were considered a best practice it would be an additional centralization pressure... since if a majority of miners have wacko-node-x in their ensemble and you don't, you'll be at increased risk... so it's a race to the bottom on resource usage.
04:39:09op_mul:lechuga_: I think it's crazy to suggest miners need to be doing it. I'm doing it and it's incredibly resource intensive.
04:39:41justanotheruser:obviously such a fork would need to be reported immediately too
04:39:46op_mul:several hundred dollars worth of hardware just sitting around processing the same block like 6 times in a row.
04:40:24op_mul:justanotheruser: well, it's set up to call me if there's a failure. I can act quickly based on that if the need arises.
04:40:53gmaxwell:But beyond saying they shouldn't, I think we've also seen that they just won't. I worry about things like miners not validating anything at all. I had a pretty rude argument on bct a couple months ago with a pool op who thought it was fine to start mining on a block without validating it.
04:41:12justanotheruser:op_mul: and one of the consensus evaluators may be written in C++ and the other in JS/Ruby (toshi)... it will be slower and more resource intensive by more than a factor of 6
04:41:27op_mul:gmaxwell: discus fish, right? they lost a lot of money to that once.
04:41:39kanzure:discus fish said that?
04:41:41justanotheruser:gmaxwell: how does he create a block without validating it?
04:41:44gmaxwell:fortunately the extreme cost aversion that causes them to think that kind of crap is an okay thing to do (much less talk about in public) also keeps them from lifting a finger to modify the software.
04:41:50justanotheruser:s/a block/work/
04:41:50op_mul:no, discus fish mine on top of the headers of other pools.
04:42:08gmaxwell:op_mul: the argument wasn't actually with discus fish. Though they're known to have done that.
04:42:19op_mul:gmaxwell: insane.
04:42:27gmaxwell:justanotheruser: just ... don't validate. You can always produce another block, especially easy if it has no transactions.
04:42:41lechuga_:do any of them even support gbt yet
04:43:14lechuga_:last i checked the answer was still no
04:43:17justanotheruser:I see, so they are just getting work from another pool?
04:43:17kanzure:gmaxwell: can you double check my "anyway, just because there's "forking risk" does not mean that you should create as many possibly forking implementations as possible" line of reasoning?
04:43:33gmaxwell:lechuga_: a number of pools have, basically anything that runs luke's eloipool software has, since ... didn't have to do anything to make it work... Most of the well known ones have written their own software though, so ... no.
04:44:44gmaxwell:kanzure: I agree with the line of reasoning generally. It's also not reasonable to compare the unlike risk sources. A completely novel implementation has thousands of places to be inconsistent, vs version to version differences which are much smaller.
04:45:15gmaxwell:justanotheruser: they don't have to get work (though discusfish was) ... you can just get a block from someone else, ignore the content, and produce the next header.
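The trick gmaxwell describes, producing the next header while ignoring the parent block's content, can be sketched as follows. All field values below are stand-ins (a real block needs a valid merkle root committing to at least a coinbase transaction); the point is only that nothing about the parent beyond its 32-byte hash is ever inspected:

```python
import hashlib
import struct

def dsha256(raw: bytes) -> bytes:
    # bitcoin's double SHA-256
    return hashlib.sha256(hashlib.sha256(raw).digest()).digest()

# all a non-validating miner needs from the network is the parent
# header's hash; the parent's *contents* are never looked at
parent_header = bytes(80)            # stand-in for a header seen on the wire
parent_hash = dsha256(parent_header)

child_header = (
    struct.pack("<i", 2)             # version
    + parent_hash                    # commits to an unvalidated parent
    + bytes(32)                      # stand-in merkle root (a real block
                                     # needs at least a coinbase tx here)
    + struct.pack("<III", 0, 0x1d00ffff, 0)  # time, nBits, nonce
)
assert len(child_header) == 80       # a well-formed 80-byte header
```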
04:45:25kanzure:so that other kind sounds like something like internal self-consistency or some sort of mirror/symmetry concept thing that i don't know the name for
04:46:00kanzure:oh right, it's not just same-same version but also same-previous (a)symmetries, hrm
04:46:24justanotheruser:Is the mining pool trusting someone to make a valid block much worse than all the miners trusting the pool to make a valid block
04:46:33kanzure:and same-previous-vs-future blah. i can see why this is not obvious to others, but i can't say that's okay
04:46:40justanotheruser:Either way, all the miners are getting their work from an authority
04:46:50gmaxwell:justanotheruser: the pool at least has a reputation.
04:47:27justanotheruser:Are they getting the blocks from some random node, or someone the pool trusts?
04:48:02gmaxwell:kanzure: the argument also sounds less strong (though perhaps it shouldn't be) when the bogus claim of an existence proof for a version to version incompatibility.
04:48:12gmaxwell:kanzure: er without the bogus claim*
04:48:26kanzure:oh wait, what's bogus about that claim?
04:49:40gmaxwell:kanzure: we've never had one, at least that we know about! we've only had the self-self inconsistency (all versions prior to 0.8), at least in released software. We've introduced cross version inconsistencies in git and found them though.
04:50:00kanzure:oh
04:51:08gmaxwell:I probably should be more aggressive about correcting that claim wrt 0.8. I haven't because the incorrect version of events teaches a reasonable lesson, unless you want to use it as an actual gauge of the risk of that particular failure mode...
04:52:19kanzure:what about all the consensus bugs in bitcoin-ruby or something
04:52:23kanzure:wouldn't that make a good exampe
04:52:27kanzure:*example
04:52:36gmaxwell:Well it makes the opposite example to the one dave wanted there.
04:52:47gmaxwell:It's very easy to show complete reimplementations being inconsistent.
04:53:13gmaxwell:And we can show self-self inconsistencies (which the
04:53:44gmaxwell:ensemble validation approach makes even riskier unless the ensemble is quite large and the failures have the right behavior)
04:54:23gmaxwell:But we cannot show version to version. The nearest we can do is see where they've existed as patches which got caught. And maybe from there try to extrapolate the risk.
04:55:12kanzure:i am not sure he would be convinced by version-to-version
04:55:26kanzure:gah, i meant to add +incompatibility at the end
04:55:37gmaxwell:I guess one data point is that virtually no one runs the blocktester locally before submitting pull requests, so we'd have public evidence if people were frequently breaking the consensus with their patches in currently detectable ways.
04:56:25gmaxwell:kanzure: well he claims that 0.8 was a version to version change. Though this isn't true. It's basically arguing that alt_implementation is equivalently dangerous as an arbitrary new version.
04:57:25gmaxwell:And I think it's very very easy to show that to be untrue by looking at the history of alt implementations vs new versions of bitcoin.
04:58:16kanzure:btw, for whatever it may be worth, i think pitching directly the verifiable implementation plan is probably quicker rather than attempting to convince people about dangerous incompatibilities (which for whatever reason is non-obvious to people who tend to not be me)
04:58:21gmaxwell:Self-self may be a better point; but self-self risk is, I think, made worse by the ensemble verification approach.
04:58:55gmaxwell:kanzure: I agree. You haven't seen me out telling anyone in public about this, except other implementers. It's just basically impossible to get joe-user to have any notion of any of this.
04:59:56gmaxwell:And so you see bitcoin core working to cordon off the consensus code. We've been experimenting with moving it into a simple virtual machine too. But of course, we have to do this stuff insanely slowly so that the work itself doesn't create unacceptable consensus risk.
05:00:23kanzure:also: it would be fun to come up with a list of rules that need to be invalidated or removed from consensus that, individually, look benign but really cause disastrous centralization, as another sort of educational tool for implementers to ponder the epistemic consequences of
05:00:27gmaxwell:(Not just pitching the plan, but doing the work. Pitching the plan won't help, because it's not work anyone actually wants to do.)
05:00:38kanzure:s/but really cause/but together really cause
05:01:28kanzure:oh sure, actual work is much harder to find people for
05:01:47gmaxwell:well I think the bitcoin ruby checksig not issue is an elegant point. (also because bitcoinj had made a similar but more subtle error something like a year before)
05:02:53gmaxwell:one of the bitcoin ruby consensus issues was that if a CHECKSIG operation failed it threw an exception and invalidated the transaction. ... But there is nothing technically wrong with a transaction that CHECKSIGs and ignores the result: or even more fun, OP_CHECKSIG OP_NOT e.g. a must fail signature.
05:03:38gmaxwell:Bitcoinj's similar issue was that a signature with length zero (or some other similar kind of bad signature) would throw an exception and thus invalidate the transaction.
05:04:41gmaxwell:In the case of bitcoinj it was even third party code that threw the exception... so unless you went digging inside your system libraries you'd never realize this outcome was possible.
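The bitcoin-ruby/bitcoinj class of bug gmaxwell describes above can be shown in a minimal sketch; these are toy stand-ins for a script interpreter, not real wallet code. On the consensus rules a failed CHECKSIG merely pushes false, so `OP_CHECKSIG OP_NOT` makes a *failing* signature the valid spend path, while a node that turns the failure into an exception rejects the transaction outright:

```python
def op_checksig(stack, sig_ok):
    # consensus behaviour: a failed signature check just pushes False;
    # it does not abort script execution
    stack.append(sig_ok)

def op_checksig_buggy(stack, sig_ok):
    # the bitcoin-ruby-style mistake: a failed check raises, which
    # invalidates the whole transaction
    if not sig_ok:
        raise ValueError("signature check failed")
    stack.append(True)

def op_not(stack):
    stack.append(not stack.pop())

# script "<sig> <pubkey> OP_CHECKSIG OP_NOT": a must-fail signature
stack = []
op_checksig(stack, sig_ok=False)   # correct node: pushes False
op_not(stack)
print(stack[-1])                   # True -- the script succeeds

try:                               # buggy node: the same script "fails"
    op_checksig_buggy([], sig_ok=False)
except ValueError:
    print("buggy node rejects a consensus-valid transaction")
```

A single such transaction in a block is enough to fork the buggy implementation off the network.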
05:07:03gmaxwell:I think there is a belief that these risks come from not knowing what bitcoin core is doing; and that's certainly a source of risks (though I note, I don't believe btcd reported a single unknown behavior in bitcoin core; Matt reported quite a few. Perhaps that's evidence that there is less unknown behavior now)... but you don't need to have any ambiguity as to what bitcoin core is doing to have no
05:07:09gmaxwell:idea what your own software is doing. That's just the nature of programming today: there is so much abstraction that you really don't know what things are doing.
05:07:36gmaxwell:(see also the youtube link I pointed petertodd to in here recently, with the prank electronics)
05:07:59kanzure:i saw that one, it was cute
05:09:02gwillen:gmaxwell: was this the troll-circuit video, with troll-series and troll-parallel
05:09:12gmaxwell:anyone who's done much RF electronics has experienced the issue that no component is pure; your caps have non-trivial inductance (as do your wires..) yadda yadda.. It's like that in software too. But most software is building things where these differences don't matter much.
05:09:20gmaxwell:gwillen: yes.
05:09:29gwillen:I love that so much.
05:09:53kanzure:gmaxwell: also this is fun and slightly related http://diyhpl.us/~bryan/papers2/Murphy%20was%20an%20optimist%20-%20rare%20failure%20modes%20in%20Byzantine%20systems%20-%20Kevin%20R.%20Driscoll.pdf
05:10:05gmaxwell:I _almost_ had it right what he was doing in the three-LED one, I figured there was an RF generator, but I thought it was the battery... not the clip!
05:10:29kanzure:me too; i may have seen a battery version in the past, though
05:10:57gmaxwell:well it's a classic gag to build "batteries" that put out high power RF so you can light up bulbs touching only one terminal.
05:12:37gwillen:hah
05:12:49kanzure:"So, when a designer says that the failure can’t happen, this means that it hasn’t been seen in less than 5,000 hours of observation"
05:13:07kanzure:"We cannot rely on our experience-based intuition to determine whether a failure can happen within required probability limits"
05:13:38gwillen:this reminds me of that article where a pilot is complaining about healthcare, and comparing their respective notions of 'rare enough to ignore'
05:13:45kanzure:i should use this line of arguing more often
05:14:20gwillen:http://www.newstatesman.com/2014/05/how-mistakes-can-save-lives
05:15:13gwillen:he cites 1 in 20,000 anesthesia inductions becoming emergency situations, which is "rare"
05:15:15gmaxwell:did you see the recent "cardio emergency events have lower mortality when the cardio docs are off at convention" paper?
05:15:29gwillen:versus engine failure being like 1 in 1,000,000 or better
05:15:32kanzure:i saw the headline but not the paper
05:15:48gwillen:gmaxwell: yeah, some lol there
05:15:53gwillen:although a lot of potential confounding
05:15:56gmaxwell:kanzure: was what it said on the tin more or less.
05:15:59kanzure:is it possible that they are worse at recording data when they are not there
05:16:09gmaxwell:kanzure: no, they seemed to control for that.
05:16:14kanzure:damn
05:16:24gmaxwell:But they're not controlling for things like less experienced (job hunting) people going to the conference.
05:16:38gwillen:they tested something like 30-day outcomes for patients admitted while the conference was happening
05:17:04rusty:gmaxwell: Heh, I'd assume the opposite, since many conferences are actually junkets.
05:17:23gwillen:gmaxwell: the issue that seems most fundamental to me is "what if interventions cause people to die now, whereas conservative management causes them to die later"
05:17:33gwillen:so the 30-day endpoint rewards doing nothing because the experts are all gone
05:17:35gmaxwell:rusty: often seems to be two groups, junket takers and people looking for jobs.
05:18:13gmaxwell:gwillen: but they also broke down by type, that seemed unlikely for the people going in with cardiac arrest!
05:18:24gmaxwell:(where doing nothing at all means they almost certainly will die)
05:18:46gmaxwell:but yes, looking at two years as well would be interesting.
05:18:47rusty:gmaxwell: Well, junket takers increase with seniority, if conferences are seen as a perk.
05:26:02gmaxwell:kanzure: in any case, all is not lost. I've found and fixed bugs that from a random chance failure perspective should have looked like 10^-18 events. For example, in some cases we're able to just exhaust inputs to test things... running 2^32 tests is no big deal in many cases now.
05:27:10gmaxwell:Though I do kind of fear that software may well remain an orgy of failure in most domains because the stakes just don't justify the extreme cost of making it reliable... not unless we get practical quantum computers, in which case a lot more testing would be tractable.
05:28:41op_mul:has anybody ever developed a fuzzer for bitcoin script?
05:28:49op_mul:could be mildly interesting, I think.
05:30:17op_mul:I know andytoshi did some static analysis a while back
05:30:24gmaxwell:Laughing at the retest OK in that slide deck. In a prior life at a network equipment vendor, I had encountered an OC192 interface with a hosed flipflop that would randomly flip one bit about 4000 bytes into any packet that went through it. ISIS (common internal routing protocol in service providers) uses padding in its heartbeats to pad up to the maximum link size, and also uses md5 auth.
05:30:40gmaxwell:op_mul: Nothing anyone has released in a turn key way. It would be easier to do now that libconsensus exists.
05:31:06gmaxwell:You just need to have libconsensus shimmed up to take input from stdin and then AFL works.
05:31:38gmaxwell:I wrote one for that but haven't done much with it because I've been spending most of my testing cycles on libsecp256k1 and 0.10rc1.
05:31:49op_mul:I was thinking you could go more specific and actually do runs on bitcoin core and btcd at the same time
05:32:10op_mul:not sure how plausible that is though
05:33:10gmaxwell:(cont interface story) so when a customer I worked with hit that bad interface, I figured out the actual problem, but looked up the history on the part, it had been returned 5 times by other operators, no-trouble-found, and sent back out in the field... just because basically no traffic uses large packets normally, and the repair shop tests didn't.
05:33:42gmaxwell:op_mul: requires a lot of exposed state to do that usefully.
05:36:14gmaxwell:If that card had made its way into a network that didn't use ISIS it probably would have stayed there undetected for years, happily corrupting any large packets that went through it.
05:37:02op_mul:when you say flip flop, are you talking a literal, discrete IC?
05:38:15op_mul:sort of baffled as to how you would have narrowed it down to that if it was part of a larger component
05:40:02phantomcircuit:op_mul, he's probably talking about a flipflop on an asic
05:40:12phantomcircuit:which must have been hilariously "fun" to find
05:40:39gmaxwell:op_mul: there was a wide parallel two packet buffer right on the inside of the phy (with a buffer external to it full of flipflops), virtually everything else in our hardware had per _link_ CRC protection (e.g. between each component). so after reproduction it was not so hard to narrow down where it was.
05:41:50phantomcircuit:that sounds nice heh
05:42:14op_mul:gmaxwell: ah.
05:43:09gmaxwell:phantomcircuit: nice things about working on N-th generation designs, the earlier hardware didn't have the extensive error detection.
05:45:17op_mul:see this is where your knowledge actually comes in handy. where on earth can I reuse experience with DMX based dimmers.
05:46:10gmaxwell:which was just awful in the field "this million dollar router isn't working right" "uhh. lets swap parts until it starts working right." (doubly so in devices that internally had distributed designs, so there were failure modes where the part that needed to be replaced seemed to have no relation to the things it was affecting: e.g. to avoid memory bandwidth bottlenecks incoming packets were
05:46:16gmaxwell:cellified and sprayed across all linecards)
05:47:04gmaxwell:op_mul: I dunno work with DMX taught me lots about the relative importance of termination.
05:49:57op_mul:I got to watch people learn over and over again that plugging kilowatts of lights into a single dimmer output will cause it to ignite.
05:53:10gmaxwell:"back to the old variacs for you!"
05:54:44op_mul:pretty sure this dimmer was built in the early 80s and had about the technical sophistication of a toaster. when you ramped the channels up you could hear all the coins in it screaming and protesting.
05:55:06op_mul:s/coins/coils
05:57:10gmaxwell:op_mul: usually the dimmers are just big SCRs. produce so much emi that everything not bolted down vibrates a bit. :P
05:58:45op_mul:gmaxwell: sounds about right. in this case somebody decided to design the building so that the dimmer board was at the top of a ladder above the stage floor. you could only work on it with one hand, because the other would have to be holding the ladder.
05:59:59op_mul:as far as I can remember it had no fuses, no breakers, if anybody went too crazy with the double adaptors things would just start melting. at one point I had to stop someone from plugging an electric floor heater into one of the sockets.
06:00:03Keefe_:Keefe_ is now known as Keefe
06:01:08gmaxwell:hah would probably work fine on the heater, assuming that heater didn't overload it. :P A heater and an incandescent light ... same thing.
06:02:04gmaxwell:kanzure: if you run across anything more like that presentation, please send to me.
06:03:14op_mul:I think most of the cans maxed out at 100W or so, the spots were arc lamps so they had to have their own power supplies isolated from everything else.
06:03:28phantomcircuit: I got to watch people learn over and over again that plugging kilowatts of lights into a single dimmer output will cause it to ignite.
06:03:29phantomcircuit:wat
06:03:32phantomcircuit:who does that??
06:05:18op_mul:people with double adaptors who don't realise chaining them 5 deep is going to cause problems.
06:10:17op_mul:getting back on the bitcoin topic for a moment; there was some talk last night that the bitstamp theft might have been a RNG issue, so I took the liberty of checking (it wasn't)
06:11:17op_mul:in the process of that, it looks like some genius has written a new wallet which does a complete Sony with its transactions. the EC nonce it uses is totally random (as far as I can tell), except it uses the same one for every signature in its transaction.
06:11:19gmaxwell:op_mul: another reason why using multicable/strand lighting cables is helpful (e.g. socapex)...
06:12:13gmaxwell:op_mul: possible software design flaw might be reading randomness in only once per transaction.
06:12:18op_mul:they've not leaked any keys because they don't ever reuse addresses.
06:12:44gmaxwell:op_mul: reuse may be possible for an attacker to induce. (e.g. send them dust)
06:13:09op_mul:gmaxwell: I was thinking they might be forking to sign, and each thread gets the same RNG state. I think that's similar to the blockchain.info webworker bug.
06:13:57gmaxwell:forking to sign in one transaction though. ugh. signing is not that slow on anything which has multiple processors, the overhead of forking should dwarf even remotely competent signing code.
06:14:11gmaxwell:but yea, sure, even if was slower, someone might do that.
06:14:11op_mul:here's a sample, anyway. http://webbtc.com/tx/5808f8e297759c47a0ef34fd884051e854c528a94c0e6c460bc4537c2e736bac
06:14:43gmaxwell:another possibility is that it's an implementation of derandomized DSA that depends only on the message.
06:15:24op_mul:that's scary.
06:16:09gmaxwell:or something like &privkey vs &privkey[0]
06:16:25gmaxwell:(e.g. address of the private key instead of the private key)
06:16:59op_mul:the outputs come from some weird ass stuff too, like this. http://webbtc.com/tx/708e6efa39d626575b2f32e4fb32e5f390df1c0529d4cdab669530abb5e20994
06:19:53op_mul:seems more like it's someone messing around than a bug, then.
06:22:29gmaxwell:Want the private keys there?
06:22:58op_mul:huh?
06:23:07gmaxwell:oh, read the wrong line. thought I'd found the discrete log of that nonce.
06:23:56gmaxwell:another possible failure mode: just passing the message hash as the nonce.
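Why the nonce-reuse wallet above is "a complete Sony" can be shown with toy DSA over tiny made-up parameters (p=23, q=11, g=4, chosen only for illustration); ECDSA has the same algebra, with r derived from a curve point instead of a modular exponent. Two signatures sharing one nonce let anyone solve for the private key:

```python
# toy DSA over tiny parameters to show WHY nonce reuse leaks the key
p, q, g = 23, 11, 4          # g has order q in Z_p*

def sign(d, k, h):
    # textbook DSA: r from the nonce, s mixes nonce, key, and hash
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + d * r) % q
    return r, s

d, k = 7, 6                  # private key, and a nonce reused twice
h1, h2 = 3, 9                # two different message hashes
r, s1 = sign(d, k, h1)
r2, s2 = sign(d, k, h2)
assert r == r2               # same nonce -> same r, visible on-chain

# recover the nonce, then the private key, from the two signatures:
#   k = (h1 - h2) / (s1 - s2),  d = (s1*k - h1) / r   (all mod q)
k_rec = (h1 - h2) * pow(s1 - s2, -1, q) % q
d_rec = (s1 * k_rec - h1) * pow(r, -1, q) % q
print(d_rec)                 # 7 -- the private key falls out
```

The same r appearing in two of the wallet's signatures is exactly the fingerprint op_mul spotted in the sample transaction.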
06:25:05op_mul:I suspect you probably don't *want* to go looking for that sort of thing.
06:26:51gmaxwell:well I want to understand the failure mode in order to avoid it.
06:29:38op_mul:having a test suite is a nice way to sidestep it completely
06:30:06op_mul:this particular issue, that is.
06:30:21gmaxwell:maybe. Unclear. I could imagine tests that this would pass.
06:30:42phantomcircuit:op_mul, more likely execve to a signing application
06:30:43phantomcircuit:fun
06:31:12op_mul:doesn't core have test vectors? if I was writing a wallet I'd be expecting to see byte for byte matches with core (now that it's RFC6979)
06:31:31gmaxwell:Yes, but they're at the wrong level.
06:31:55gmaxwell:It tests the RFC6979 hmac, but you could mis-apply it (e.g. by not giving the private key as an input)
06:32:22gmaxwell:s/hmac/CSPRNG/ really
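The misuse gmaxwell describes can be sketched as follows. The helper names are illustrative, and the single HMAC call is a deliberate simplification of RFC 6979's full V/K HMAC-DRBG loop; the point is only which inputs feed the nonce:

```python
import hashlib
import hmac

def nonce_with_key(privkey: bytes, msg_hash: bytes) -> bytes:
    # the RFC 6979 idea: the nonce is a deterministic function of
    # BOTH the private key and the message hash
    return hmac.new(privkey, msg_hash, hashlib.sha256).digest()

def nonce_misapplied(msg_hash: bytes) -> bytes:
    # the mis-application: leave the private key out of the input,
    # and the nonce becomes a function of the message alone
    return hmac.new(b"", msg_hash, hashlib.sha256).digest()

h = hashlib.sha256(b"some transaction").digest()
k1, k2 = b"\x01" * 32, b"\x02" * 32          # two different signers
assert nonce_with_key(k1, h) != nonce_with_key(k2, h)
# both signers of the same message derive the identical nonce,
# which is exactly the message-only failure mode discussed above
assert nonce_misapplied(h) == nonce_misapplied(h)
```

A CSPRNG-level test vector would pass for both functions, which is why end-to-end "this message, this key, this signature" vectors catch what the lower-level tests miss.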
06:33:05midnightmagic:op_mul: to trigger it you must specifically put it under strong memory pressures. Go itself can segfault. Apparently.
06:33:25gmaxwell:https://github.com/bitcoin/bitcoin/blob/master/src/test/crypto_tests.cpp#L251
06:34:32op_mul:"RIPEMD160 is considered to be safe" < oh no, you never ever ever write that
06:34:51phantomcircuit:op_mul, lolin
06:34:54gmaxwell:safe for what?!
06:34:57phantomcircuit:"that cant happen!"
06:35:13op_mul:that's the sort of thing that will get quoted in a snarky blog post eventually
06:35:36phantomcircuit:gmaxwell, TestRIPEMD160("RIPEMD160 is considered to be safe", "a7d78608c7af8a8e728778e81576870734122b66");
06:35:39phantomcircuit:ha
06:36:01gmaxwell:op_mul: ah there is full system tests in key_tests.cpp.
06:38:16gmaxwell:if anyone feels like making your first contribution to libsecp256k1, copying in those 6979 tests would be nice.
06:39:31Taek:tempted to bite
06:40:10Taek:could it be done in a weekend?
06:40:11midnightmagic:which tests?
06:41:02midnightmagic:you mean from the rfc?
06:41:05gmaxwell:Taek: it would only take sipa or I a few minutes; someone who was unfamiliar with bitcoin core and libsecp256k1 might spin for a couple hours.
06:41:26gmaxwell:midnightmagic: there aren't tests for the full system with secp256k1 in the RFC. But there are in bitcoin core.
06:42:29gmaxwell:the library has tests for the 6979 implementation, but no fixed tests of "this message,key should give this signature"
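For readers following along, the construction under discussion can be sketched in Python. This is a hedged illustration of RFC 6979 nonce derivation for the secp256k1/SHA-256 case (qlen = hlen = 256), not libsecp256k1's actual code; the static vectors gmaxwell wants would assert specific (message, key) → signature bytes on top of this.

```python
import hashlib
import hmac

# secp256k1 group order (the "q" of RFC 6979)
SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def rfc6979_nonce(privkey: int, msg_hash: bytes) -> int:
    """Derive a deterministic nonce per RFC 6979. With qlen == hlen == 256,
    bits2int is the identity and bits2octets is a mod-n reduction."""
    q = SECP256K1_N
    x = privkey.to_bytes(32, 'big')
    h1 = (int.from_bytes(msg_hash, 'big') % q).to_bytes(32, 'big')
    V, K = b'\x01' * 32, b'\x00' * 32
    K = hmac.new(K, V + b'\x00' + x + h1, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    K = hmac.new(K, V + b'\x01' + x + h1, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    while True:
        V = hmac.new(K, V, hashlib.sha256).digest()
        k = int.from_bytes(V, 'big')
        if 1 <= k < q:          # reject 0 and values >= the group order
            return k
        K = hmac.new(K, V + b'\x00', hashlib.sha256).digest()
        V = hmac.new(K, V, hashlib.sha256).digest()
```

Because the derivation is keyed by both the private key and the message hash, two implementations that agree byte-for-byte here (and in the signing math) must agree on whole signatures, which is what makes fixed vectors meaningful.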
06:43:13Taek:sounds reasonable enough to me. Can I expect it to be available still by this weekend?
06:43:27gmaxwell:I'm not planning on doing it right now.
08:08:32wumpus:moving the tests for RFC6979 from bitcoin core down to secp256k1 makes a lot of sense
08:09:23gmaxwell:wumpus: no harm in having them in both places. RFC6979 CSPRNG tests are already in libsecp256k1.
08:09:24wumpus:we're going through some paces to add a test interface to a three-line wrapper around secp256k1, e.g. https://github.com/bitcoin/bitcoin/pull/5506/files#diff-54e45787cdb38fd8469fe6d1c48e67beR76
08:09:50gmaxwell:but perhaps the tests in bitcoin core should just be whole transactions.
08:10:21gmaxwell:oh yuck yea what you just linked to shouldn't be there I think.
08:10:48wumpus:my initial intuition was 'move this to src/tests', but that would move secp256k1 specific code to our tests, so in that case you could just as well do the testing in secp256k1's tests themselves
08:11:03wumpus:*some* signing tests need to be in bitcoin core, sure
08:11:10gmaxwell:We should just test some "message","key" tuples in bitcoin core's tests. (key_tests.cpp)
08:11:44gmaxwell:I already added some fairly extensive nonce function tests to libsecp256k1, just no static vectors.
08:12:27gmaxwell:https://github.com/bitcoin/secp256k1/commit/941e221f66213d092869ff95d313de8b18277028
08:12:29wumpus:yes, now that it's all deterministic we can make it a data-driven test
08:13:09gmaxwell:I didn't add static vectors because I would have had to author some, and authoring them using libsecp256k1 didn't seem prudent.
08:13:22gmaxwell:But I'd forgotten that sipa had already done this with different code.
08:13:57sipa:well you can revert to bitcoin core after deterministic signing, but before the secp256k1 merge
08:13:57wumpus:I was also a bit confused about having a function pointer to generate nonces on the signing interface, is this just to accommodate this testing interface?
08:14:27sipa:wumpus: it's for bit-by-bit compatibility pretty much
08:14:48sipa:wumpus: oh
08:14:49wumpus:sipa: from a testing viewpoint that's wise
08:15:07sipa:no, it is to prevent callers from needing to iterate
08:15:29sipa:and at the same time provide a safe default inside the library
08:15:33wumpus:but I mean, being able to provide a different nonce generation function in the first place
08:15:34gmaxwell:wumpus: It also lets you get exact behavior with different nonce functions. (or, for example, if you care about speed, some non-spec tweaks can basically double the signing speed, rfc6979 is slow. :( )
08:15:35sipa:but still be flexible
08:15:53wumpus:ok, clear
08:16:10gmaxwell:as far as testing goes, it's also the only way to make the various edge conditions like the nonce being zero or bigger than the order testable since we can't generate sha256 output with that behavior.
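The testing trick gmaxwell describes can be illustrated with a toy signing loop in Python (all names here are hypothetical stand-ins, not the libsecp256k1 interface): a caller-supplied nonce function lets a test force the nonce-rejection branch, which hash output never exercises in practice.

```python
import hashlib

SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def default_nonce(msg_hash: bytes, privkey: int, attempt: int) -> int:
    # Stand-in for the deterministic generator: hash of (key, message, counter).
    data = privkey.to_bytes(32, 'big') + msg_hash + attempt.to_bytes(4, 'big')
    return int.from_bytes(hashlib.sha256(data).digest(), 'big')

def sign(msg_hash: bytes, privkey: int, nonce_fn=default_nonce) -> int:
    """Toy signing loop: returns the accepted nonce instead of a real
    (r, s) pair, to keep the retry logic visible."""
    for attempt in range(256):
        k = nonce_fn(msg_hash, privkey, attempt)
        if 1 <= k < SECP256K1_N:
            return k            # real code would compute the signature from k
    raise RuntimeError("nonce function never produced a usable value")
```

A test can pass `lambda m, d, a: SECP256K1_N if a == 0 else 7` to prove that the out-of-range value is rejected and the retry path runs — exactly the edge condition that cannot be reached through SHA-256 output.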
08:16:13sipa:right, computing the nonce in libsecp is now like 20% of the total signing time
08:16:56sipa:gmaxwell: double is an exaggeration :)
08:17:04sipa:maybe 30%
08:17:36gmaxwell:sipa: well it's like 100x higher latency than the low latency signing scheme I came up with (though you can't achieve that via the function pointer, alas.) so it seems huge to me regardless. :)
08:17:51sipa:right
08:19:33gmaxwell:wumpus: also, some people might reasonably prefer to have an additional random input (this is actually accommodated in 6979), because they don't quite trust the 6979 construction without it.
08:20:22gmaxwell:hm. we could probably make the stock 6979 function in libsecp256k1 take an additional random input if data!=NULL.
08:21:35wumpus:gmaxwell: ok, fair enough. It does allow for a lot more flexibility, but I was just surprised as you're usually the first to worry about function pointers, 'but what about CFI!' :-) This is a place where c++'s templates would have been nice.
08:22:03gmaxwell:yea, I hate function pointers.
08:22:17gmaxwell:I did whine a bit when sipa proposed the interface.
08:22:41wumpus:ok, I just missed that :)
08:22:58gmaxwell:And was thinking of an ifdef to break the function pointers. (E.g. just require the null option) But ... otoh, I also hate unreachable untestable code.
08:23:14gmaxwell:And I don't think testing it just one time is adequate.
08:24:29wumpus:given what is possible in C I do agree this to be the best solution, requiring iteration on the client side would be a recipe for disaster
08:24:39gmaxwell:In our actual usage the function pointer should always come from something const. (or in the case in bitcoin core its set to null) so I don't see how you can intercept control flow from that, in practice.
08:25:18sipa:gmaxwell: perhaps that warrants a comment in the .h file
08:25:30sipa:"best practice: ..."
08:25:46wumpus:full-program optimization may help here - it could get rid of the pointer in the resulting code
08:25:50benten:Are there many more corner cases for nonce function tests besides those captured in the current tests you linked earlier? Looks to be about 20 tests
08:26:07benten:20 corner cases rather
08:26:40sipa:wumpus: maybe we'd need to split the testcase and normal signing method out in key.cpp for that
08:27:30gmaxwell:benten: I haven't checked, but the tests there should achieve 100% branch coverage at least. There are not that many branches. One case we don't have a test for is what happens when none of the 0..MAX_UINT counter values is an acceptable nonce... because we go into an infinite loop in that case, which is generally considered bad form for a test to do. :)
08:27:47gmaxwell:(though I actually did test that case locally)
08:28:11benten:With ^C standing by?
08:28:17wumpus:sipa: yes
08:29:07gmaxwell:benten: well I just checked all 2^32 and made it quit on the second time it hit 1. ("test all the things.")
08:29:26sipa:wumpus: i agree that it would be nice to have the testcase code not in key.cpp though
08:30:52sipa:but i see no clean way to do so
08:31:16sipa:except by passing a nonce generation function into CKey::Sign :D
08:32:11benten:gmaxwell: neat, just reading the test code now (we can always leave the infinite loops to ethereum)
08:34:52gmaxwell:sipa: if we allow the 6979 function to accept an 'additional data' input, then your tests that need a randomized signature could use that instead of the external iteration. (uh, I forget what your update to the library rfc6979 pull does)
08:35:40sipa:gmaxwell: i really like the tests being deterministic
08:37:04gmaxwell:sipa: I mean they could use that to put in a determinstic additional input.
08:37:18sipa:ah!
08:38:14gmaxwell:(I think if I were an external reviewer I would be pretty uncomfortable with 'nonce += test_case' code we have; with klaxons going off about linearly related nonces.
08:38:17gmaxwell:)
08:39:25sipa:that's an interesting suggestion
08:39:36gmaxwell:sipa: 6979 3.6 has a spec on additional inputs to the function.
08:40:01sipa:oh no; 7 more compression function invocations i assume?
08:40:18gmaxwell:lol probably one more for a 256 bit input.
08:40:43gmaxwell:maybe zero more, I'd have to add up the sizes to see if there is room.
08:43:14gmaxwell:looks like it's one more by my figuring.
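gmaxwell's figure can be checked by counting compression-function calls; below is a rough Python model assuming standard SHA-256 padding (9 bytes minimum overhead, 64-byte blocks) and HMAC's two-pass structure, ignoring any precomputation of the key-pad states.

```python
def sha256_compressions(msg_len: int) -> int:
    # 0x80 terminator plus an 8-byte length field, padded to a 64-byte block.
    return (msg_len + 9 + 63) // 64

def hmac_sha256_compressions(msg_len: int) -> int:
    inner = sha256_compressions(64 + msg_len)   # ipad key block prepended
    outer = sha256_compressions(64 + 32)        # opad block plus inner digest
    return inner + outer
```

Each K-update HMAC in the RFC 6979 setup hashes V (32) + a separator byte + x (32) + h1 (32) = 97 bytes, i.e. 5 compressions under this model; a 32-byte additional input pushes that to 129 bytes and 6 compressions — one more per such call.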
08:45:27sipa:meh ok
08:46:17gmaxwell:yea, well, sorry, I dunno what it is with people designing protocols and not counting the compression function invocations.
08:49:29benten:Are there any immediate plans for SIGOP count limits per block to increase?
08:50:21sipa:that would be a hardfork...
08:50:27gmaxwell:No.
08:51:16gmaxwell:No one has ever suggested it, AFAIK. I can't think of a reason.
08:51:27gmaxwell:Other than the fact that the existing limit is completely braindamaged.
08:51:38benten:It's 20k now, right?
08:51:40gmaxwell:(it actually doesn't achieve its intended purpose)
08:52:08gmaxwell:benten: yes.
08:52:41sipa:in combination with the standardness rules it actually does
08:52:59gmaxwell:well standardness can do lots of things...
08:53:26sipa:hmm?
08:53:49gmaxwell:I actually think that's impossible to reach with actual usage (e.g. not with stupid things like OP_DUPing signatures or failing to use p2sh), considering that every sigop involves encoding at least 72 bytes of data.
08:54:34benten:care to explain a bit more about its failure to address the intended purpose... since if you had 2,500 transactions, each with 10 outputs on average (5x the average I know), you'd need it to increase, and p2sh buys us time for that but theoretically we'd need it for 1000's of tx/sec, if that's ever to occur?
08:54:34gmaxwell:sipa: the limit as is doesn't add anything you couldn't achieve through standardness.
08:55:09gmaxwell:benten: there are no sigops used for a p2sh output at all.
08:55:32sipa:wow no
08:55:45benten:p2sh provides a more precise means to count
08:55:49sipa:i think several comments here are not relevant or confusing
08:56:10gmaxwell:benten: that has nothing to do with _outputs_.
08:56:11benten:go ahead and clear it up for me, thanks :)
08:56:13sipa:first: for the current block size, the sigop limit is not a problem at all
08:56:43sipa:if we increase the block size, obviously the sigop limit will need to increase too; as both are a hardfork that is no extra difficulty
08:56:59benten:right, doing one you might as well do the other, you mean
08:57:44gmaxwell:benten: "1000s of tx/sec" would be the end of bitcoin as a decentralized system, I'm completely uninterested in that. You can hypothesize about it, but doing so will only piss me off.
08:57:46benten:i dont see any immediate need either, but was curious if I was not aware of one
08:58:04sipa:second: for non-p2sh we only count sigops in outputs, not in inputs, so they are not a good proxy for cpu time spent in validation, which is what they are intended to protect against
08:59:34benten:the 20k limit was intended to prevent dos, right?
08:59:37gmaxwell:benten: and for p2sh we do not count them in outputs at all (there are none in a p2sh output), so with p2sh it's impossible to hit the limit via outputs.
09:00:34gmaxwell:benten: its intended to prevent a miner from pushing all other nodes/miners off the network by having blocks that contain tons of e.g. OP_DUP2 OP_CHECKSIG that everyone spends minutes of cpu validating. (this attack was actually performed against testnet)
09:01:55gmaxwell:Unfortunately the non-p2sh test checks in the wrong place. It checks outputs, where they're not even executed. It should be checking the scriptpubkeys of transactions in blocks. e.g. an excessive sigop pubkey is just unspendable.
09:01:56benten:but due to the limit being pretty high (20k), it fails to actually prevent dos, since a miner could still do this today?
09:02:17sipa:no
09:02:25benten:ah
09:02:31gmaxwell:no, 20k isn't that high, even with slow code it's maybe only a couple seconds.
09:02:35sipa:it simply does not count something that corresponds to verification time
09:03:01sipa:it counts "creation of outputs that will have some combined time to verify"
09:03:04benten:gmaxwell, is moving the non-p2sh test check to the better place non-trivial?
09:03:23gmaxwell:It's just not applied to signatures, which means you could still construct a block that spends a minute or so on signature validation.
09:03:25sipa:it does not count "verifying outputs"
09:03:44sipa:benten: it's a softfork to do it sanely, a hardfork to do it right
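The rule being criticized can be sketched as follows — a simplified Python model of legacy-style sigop counting, not Bitcoin Core's actual GetLegacySigOpCount (it ignores OP_PUSHDATA1/2/4 and the "accurate" P2SH mode). Note that the count is purely syntactic: it is applied to scripts whether or not they are ever executed, which is the mismatch gmaxwell and sipa are pointing at.

```python
# Opcode values from the Bitcoin script encoding.
OP_CHECKSIG, OP_CHECKSIGVERIFY = 0xAC, 0xAD
OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY = 0xAE, 0xAF

def legacy_sigop_count(script: bytes) -> int:
    """Count sigops by scanning opcodes; a bare CHECKMULTISIG is
    charged a flat worst-case 20."""
    count, i = 0, 0
    while i < len(script):
        op = script[i]
        if 0x01 <= op <= 0x4B:      # direct push: skip the pushed bytes
            i += 1 + op
            continue
        if op in (OP_CHECKSIG, OP_CHECKSIGVERIFY):
            count += 1
        elif op in (OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY):
            count += 20
        i += 1
    return count
```

Since the charge is syntactic, it hits output scripts that may never be spent while missing most of the CPU cost actually incurred at input-validation time.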
09:04:03gmaxwell:and today we've been less concerned with attacks by miners, though perhaps wrongly.
09:04:10gmaxwell:(than we were in 2010/2011)
09:04:39benten:What would a miner's incentive be to attack in that way?
09:05:16rajaniemi.freenode.net:topic is: This channel is not about short-term Bitcoin development | http://bitcoin.ninja/ | This channel is logged. | For logs and more information, visit http://bitcoin.ninja
09:05:16petertodd:benten: miners don't actually have an incentive to broadcast their blocks to >30% of hashing power if their goal is to get more blocks than the competition
09:05:28gmaxwell:Maybe, you could potentially cause orphaning in other miners, but probably not for long as it would seriously piss off people and quickly trigger adding the additional rule to stop it.
09:05:38gmaxwell:It's not the most interesting thing a malicious miner could do at least.
09:05:46benten:petertodd, that would seem to be the case.
09:06:54petertodd:benten: you mean it would seem they do have that incentive or don't?
09:07:09benten:they don't have the incentive
09:07:13petertodd:benten: yup
09:07:30sipa:just 30%?
09:07:48petertodd:sipa: yeah, maths on the mailing list somewhere... I originally thought 50% but turns out I was wrong on that
09:07:50benten:we've seen miners avoid full blocks in many cases, but not the other way around. gmaxwell's description of causing orphans is an interesting thought, though risky beyond worth.
09:08:00petertodd:sipa: basically, the competition has to find two blocks and you just need to find one
09:09:38gmaxwell:sipa: it's basically the same state as the 'selfish mining' paper with a perfect communications advantage (that is, if you can give your block to 30% and not have it go further)
09:10:06gmaxwell:It actually causes you to have a lot of orphan blocks, but if you persist and all stays constant (e.g. outside hashrate not going up) you'll offset after the retarget.
09:10:15sipa:33.3...%, no?
09:10:36petertodd:sipa: yeah, that might be the exact number, ~30%
09:10:50gmaxwell:it's probably even somewhat more than that, including other effects, it's only a third with frictionless, latencyless, etc.
09:11:18sipa:sure; just questioning my understanding if it's a more complex expression than "1/3"
09:11:34gmaxwell:it's also complicated to analyze if multiple parties do this at once.
09:12:21gmaxwell:because if you're already one behind (but you don't know it), you need a larger share of the other people who are one behind to have any hope of catching up.
09:13:08petertodd:sipa: I derived an equation for it at one point - I may have even done it as an integral to try to model propagation - kinda forget...
11:33:25gmaxwell:8117d2fc3223200de82e7f45d96a744d0097965c162779c433b9bd2802d92f95
21:35:36gmaxwell:Coinbase, blockchain, chain, what are some other companies that have taken the names of parts of Bitcoin?
21:36:55gmaxwell:(I'm trying to make a joke on reddit and I need one more.)
21:37:03gmaxwell:Seems like thegensisblock changed their name.
21:41:53op_mul:greenaddress?
21:42:30Alanius:bitstamp
21:42:45op_mul:huh?
21:42:58Alanius:well it's a clear word play on bitcoin
21:43:04sipa:sure, all of them are
21:43:30phantomcircuit:gmaxwell, i propose that all references to chain imply that they're talking about a physical chain
21:43:31op_mul:he's looking for more technical term dilution though
21:43:35phantomcircuit:possibly a gold chain
21:43:37gmaxwell:in the case of coinbase and blockchain in particular they're using actual parts of the system which casually produces confusion.
21:44:06sipa:anything with 'mining' in the name?
21:44:18phantomcircuit:genesismining maybe
21:44:29phantomcircuit:brb forming new company "multisig"
21:45:39zooko:* zooko laughs.
21:46:04sipa:is there a talk 'the bitcoin address' somewhere?
21:46:45Eliel:sounds like a mining strategy simulator might be useful. Make it a competitive sport and get data out of it as a result :P
21:46:45pigeons:not really a part of the system but there is the confusing "greenaddresses" v2
21:47:03phantomcircuit:sooo
21:47:11phantomcircuit:maybe we should get rid of "CWalletTx::GetAmounts: Unknown transaction type found"
21:47:12gmaxwell:http://www.reddit.com/r/Bitcoin/comments/2rji9f/looking_before_the_scaling_up_leap_by_gavin/cngl0zw?context=3
21:47:14GAit:multisigna
21:47:21phantomcircuit:i have about 20GB of debug.log full of that
21:48:00gmaxwell:phantomcircuit: hm. my p2pool mining should result in that but I don't remember being annoyed by it recently; I thought we already got rid of that.
21:48:02op_mul:heh, you reject raw multisig?
21:48:02phantomcircuit:GAit, would get sued by cigna
21:48:22phantomcircuit:it's not rawmultisig it's op_return weirdness
21:48:24gmaxwell:GAit: :)
21:48:32pigeons:yeah there is a mining pool called "p2pool"
21:48:37pigeons:that's confusing
21:49:15gmaxwell:pigeons: ooh, good one, added.
21:49:33phantomcircuit:is it at least a passthrough for p2pool?
21:49:46pigeons:it is a p2pool node i believe
21:49:50op_mul:supposedly.
21:50:00pigeons:with all the fees of a centralized pool
21:50:06gmaxwell:Maybe it is! I don't know if there is a way to tell without finding a share on it.
21:50:10sipa:and all variance of p2pool...
21:50:23gmaxwell:sipa: presumably it lowers the variance relative to normal use of p2pool.
21:50:27sipa:ah
21:50:29op_mul:gmaxwell: the evil thing is that nobody would notice even if it was skimming.
21:50:47op_mul:gmaxwell: no, it's just a normal p2pool node with high fees. not a subpool like ognasty's.
21:51:01gmaxwell:ugh.
21:51:01GAit:for satoshi there's also his famous vault, closing down soon
21:52:06kanzure:on a scale of one to ugh, how broken is this coin selection implementation? https://github.com/BitGo/BitGoJS/blob/66fb0c833124e5561bf8d3477c4d632e488d1180/src/transactionBuilder.js#L121
21:52:51op_mul:kanzure: about a blockchain.info I think.
21:52:57kanzure:seems to be "pick the first inputs until the total is passed, then send the change back to yourself"
21:53:12kanzure:op_mul: please elaborate
21:53:28gmaxwell:kanzure: first by what criteria?
21:53:29op_mul:kanzure: heap of shit.
21:53:47kanzure:seems to be any unspents at all
21:53:54kanzure:if that's what you mean by criteria
21:54:11gmaxwell:kanzure: right but say they're sorted from smallest to largest; it's easy to make it unable to produce a transaction if so.
21:54:30op_mul:gmaxwell: seems to be sorted by whatever their API returns, I don't see an explicit function for it.
21:54:49gmaxwell:if it's sorted from largest to smallest it will always work, but will tend to grind down coins into tiny dust, which is pretty bloaty.
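The behavior kanzure quotes can be sketched as a toy greedy selector in Python (an illustration of the pattern, not BitGoJS's actual code):

```python
def greedy_select(utxos, target):
    """Naive greedy selection: take inputs in the order given until the
    target is covered; the surplus becomes change sent back to the wallet."""
    selected, total = [], 0
    for value in utxos:
        selected.append(value)
        total += value
        if total >= target:
            return selected, total - target   # (inputs used, change)
    raise ValueError("insufficient funds")
```

With coins [50, 20, 5, 1, 1, 1] and a target of 60, largest-first ordering spends two inputs while smallest-first sweeps in all six — so whatever ordering the server hands back entirely determines the behavior, which is gmaxwell's point about the sort criteria.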
21:55:16gmaxwell:not that bitcoin core is amazing on that front.
21:55:24op_mul:gmaxwell: hooray, lots of output merges!
21:55:48op_mul:I put A and B and C and D together and now my privacy is down the shitter :3
21:56:19gmaxwell:op_mul: yes, well bitcoin core's approach is quite good for privacy if you don't reuse addresses (doh.)
21:57:34GAit:gmaxwell: is there debate as to which is the best algorithm? i would like to have a couple that optimize for different things, for instance, optimizing for expiring nlocktimes first, or optimizing for closest match, or lowest fees, etc
21:58:35phantomcircuit:kanzure, i had a conversation with someone from bitgo who didn't seem to understand that HD wallets are a tree and not a chain
21:58:44phantomcircuit:which was a little uhhh
21:58:45phantomcircuit:yeah
21:59:00gmaxwell:GAit: well I can talk to particular criteria. E.g. lowest fees for this transaction is sometimes bad because it can make you pay more fees in the future (when fees are likely more expensive)
21:59:01GAit:well i don't think they offer sub items with coincontrol
21:59:46GAit:(we only go one level deep)
22:00:06gmaxwell:GAit: I'd not considered it in the face of nlocktimes funds before.
22:01:02GAit:which is bad for privacy i assume
22:01:13GAit:but i think people should be able to pick
22:04:50kanzure:op_mul: thank you
22:05:10op_mul:for what?
22:05:43kanzure:on-demand opinion generation and code review
22:08:30op_mul:kanzure: it makes me cry, I swear
22:08:44kanzure:we can cry together! it will be fun. and weird.
22:09:04phantomcircuit:mostly weird
22:09:26op_mul:> operating environment gives a cryptographically secure RNG
22:09:43op_mul:> oh lets make our own RNG using it, and then feed it through RC4!
22:09:47op_mul:every fucking time.
22:09:58phantomcircuit:i dont get why people use rc4
22:10:14sipa:hey hey we're not even at rc2 yet!
22:10:16sipa:*ducks*
22:10:19phantomcircuit:otoh attempts at building entropy pools also seem to fail
22:10:21kanzure:is that from the file?
22:10:31op_mul:because some decade old javascript thing used it.
22:11:56op_mul:kanzure: no, if you go deep enough bitgo uses the SJCL libraries random, which does similar insanity and doesn't fail hard if there's no proper entropy source.
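The sane alternative to these homemade RNG layers is to consume the operating system's CSPRNG directly and fail loudly when it is unavailable — a minimal Python sketch of the principle, not any particular wallet's API:

```python
import os

def get_random_bytes(n: int) -> bytes:
    """Return n bytes from the OS CSPRNG.

    No RC4 (or other) mixing layer on top: it adds nothing and can only
    introduce bias. os.urandom raises if the entropy source is
    unavailable, so there is no silent fallback to a weak generator."""
    if n < 0:
        raise ValueError("n must be non-negative")
    return os.urandom(n)
```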
22:17:43op_mul:"/* use a cookie to store entropy." well I'm glad that function isn't used.
22:18:29kanzure:linkz pls
22:19:41op_mul:https://github.com/BitGo/BitGoJS/blob/eda4c3f19c609e9d7e78099c253bc51ae6818a55/test/bitcoin/random.js
22:19:45op_mul:https://github.com/BitGo/BitGoJS/blob/eda4c3f19c609e9d7e78099c253bc51ae6818a55/src/bitcoin/jsbn/rng.js
22:19:50op_mul:https://github.com/bitwiseshiftleft/sjcl/blob/master/core/random.js
22:20:30op_mul:there's also a different "RNG" function here
22:20:31phantomcircuit:op_mul, that's about right
22:20:35op_mul:https://github.com/BitGo/BitGoJS/blob/eda4c3f19c609e9d7e78099c253bc51ae6818a55/src/bitcoin/crypto-js/Crypto.js#L33
22:21:04op_mul:phantomcircuit: right?
22:21:41phantomcircuit:op_mul, function get_random_bytes(count){throw "this is javascript what are you doing?";}
22:21:48GAit:uh uh
22:24:58op_mul:phantomcircuit: but see, because there's no obvious break in that RNG (but it's fragile as fuck), there's no reason it'll ever be changed. awesome!
22:25:40phantomcircuit:there seems to be an insistence on doing javascript crypto
22:25:42phantomcircuit:i dont get it
22:26:09phantomcircuit:the language and most common implementations are almost uniquely poor candidates for crypto
22:27:14op_mul:phantomcircuit: it's hip and trendy. move fast and break things!
22:27:28Apocalyptic:yeah, it's web 2.0
22:27:35Apocalyptic:everything has to be a web app
22:27:43GAit:trivial to get it wrong in the JS ecosystem, but it's also trivial to get it wrong in C if you are not careful in the first place, no? i've seen hardcoded seeds before
22:28:11GAit:i mean.. sony.
22:28:55op_mul:in javascript it's easy to make mistakes which aren't picked up. the blockchain.info failure *should* have crashed if it was written in something sane. instead it carried on as if nothing was wrong.
22:29:13gmaxwell:GAit: there is a lot less hidden complexity in C than JS. In sony's case they were doomed via an inadequate understanding of the requirements. The examples we've seen in JS have not required any misunderstanding of the requirements.
22:31:12gmaxwell:and I don't mean to pick on bc.i, they're not alone. There was another one where a change in an underlying library meant that the hash function that should have taken the message/private key always got the string "Array[]" or something like that, because an interface changed from taking a string in an array to taking a string only or something along those lines and the failure was transparent.
22:32:37GAit:i agree, i rather have languages and frameworks geared for this.
22:32:55GAit:currently on some platforms you are kinda limited though
22:33:16GAit:android/java or jni bindings to some C library
22:35:46gmaxwell:there really is no good language for this stuff invented yet, IMO. The nearest attempts so far fail at usability.
22:38:54GAit:web becomes immediately not so bad with the right/perfect hardware wallet (not sure if invented yet)
22:40:15gmaxwell:Yea, I've given a fair amount of thought to having a HW signer... lets you leave 95% of the software in whatever tool you want, while the 5% of the stuff which must be really robust is in the dedicated wallet.
22:41:10GAit:and as everything, things need some iterations before it reaches maturity, you can't just jump to perfection, it takes time and meanwhile you leave the space with the earlier tools which can be trouble. Start with one HW and then add support for more than one to allow multi company ones
22:43:01op_mul:trouble with iteration is you risk burning out your customers.
22:44:26GAit:i guess, especially if you have to make fundamental changes
22:44:45GAit:so far i've been lucky with choices, at the cost of less users
22:45:50GAit:tuning for security (and privacy, for what a cosigner can be private about) first and then increasing users, which is always an odd compromise: where do you draw the line, especially since medium security improves the average wallet's security more than super hardcore security
22:46:35GAit:gpg: i rest my case
22:47:26op_mul:I'm trying to come up with a way to convince redditors to use floppy disks as high security storage.
22:47:50GAit:may as well call it wallet roulette
22:48:40op_mul:Abort? Retry? Fail?
22:48:54GAit:gave me a good chuckle
22:49:52op_mul:problem is it's probably *better* storage than the things people have come up with regarding "paper wallets". they've all been taught to run this one weird javascript app on their computers and disconnect from the internet.
22:49:59op_mul:utter insanity, the lot of it.
22:51:59phantomcircuit:"one weird trick, they hate it!"
22:52:09phantomcircuit:yes... yes "they" do
22:56:46GAit:indeed i'm afraid of the paper wallet culture, the js pages randomly downloaded from many places and guides, and the sending of change back to the same addresses
23:47:46kanzure:hi bendavenport
23:47:55bendavenport:hi
23:51:59bendavenport:kanzure: heard you had some issues with bitgo input selection
23:52:50kanzure:nah, i was using bitgojs coin selection as an example of alternative implementations to https://github.com/bitcoin/bitcoin/blob/33d5ee683085fe5cbb6fc6ea87d45c5f52882232/src/wallet.cpp#L1232
23:54:33bendavenport:the algo you point to is pure client code, so you're not really seeing much in the way of an algorithm. the server determines what order to send you the inputs in
23:55:27kanzure:earlier in the scrollback someone else speculated as much (re: server-side ordering)
23:56:31bendavenport:that said, our server-side ordering is very simplistic currently, returning solely oldest to newest
23:56:57bendavenport:however, client can always request full list of unspents and apply arbitrary input selection algo
23:57:34kanzure:hi mbelshe
23:57:44mbelshe:hi kanzure
23:58:37mbelshe:saw your comments about input selection.
23:59:37mbelshe:happy to have more eyeballs on it. feel free to reach out to me directly at any time!
23:59:57kanzure:yep understood, i've been busy on other rabbit holes
23:59:59kanzure:one alternative i've been pondering lately https://github.com/bitcoin/bitcoin/pull/5524