00:00:01nwilcox:gmaxwell: (I haven't read the wiki link yet...) I was imagining SPV-style PoW verification as a "strong bet" against double spends, so I don't quite follow your comment.
00:00:06CodeShark:wouldn't it be sufficient for a sender to prove that the output they've created has witnesses all the way back to valid coinbase transactions?
00:00:28nwilcox:* nwilcox skims wiki page.
00:04:16gmaxwell:nwilcox: if you're trusting the hashpower then you can stop at just what's in bitcoin today. No more is needed. But _why_ trust the hashpower? it's a necessarily anonymous self-selecting group (you hope) of parties. :P There is a hope that the hashpower is economically incentivized to conform to the protocol because if they don't they'll get caught and their blocks ignored; but there needs to be
00:04:22gmaxwell:a mechanism for that to actually happen. :P
00:04:32gmaxwell:CodeShark: no, because that doesn't show the absence of a double spend.
00:05:49nwilcox:gmaxwell: Well, yes, removing the reliance on PoW assumptions would be awesome. I wasn't considering that.
00:06:45gmaxwell:K. well if you're willing to make them, then bitcoin's SPV ought to be enough.
00:07:02nwilcox:CodeShark: That sounds sufficient to me *iff* you assume PoW validation protects against double spends, right?
00:08:13gmaxwell:it's totally wasteful though, because you don't have to do that at all if you're trusting PoW, just check the initial membership. Also I suggest traversing the graph of some coins sometime, for a great many you are rapidly causally connected to a significant fraction of all transactions. :)
00:08:39gmaxwell:(thanks betting sites. :) )
00:10:59nwilcox:I blame non-hoarders for tightly interweaving the transaction graph.
00:12:14gmaxwell:nwilcox: there have been a number of on-blockchain betting services whose specific design required them to jointly spend coins from multiple sources that are atypically good at commingling histories, esp since most of the transactions to them are numerous tiny amounts.
00:20:34CodeShark:gmaxwell: what about schemes that incentivize demonstration of double-spend proofs?
00:21:40CodeShark:point is construction of the proof could be offloaded to specialists who make it their business to detect them
00:21:57CodeShark:yes, there could be a conspiracy to withhold them
00:22:36CodeShark:but assuming the incentives are not misplaced and the system is sufficiently decentralized, can't it be made at least extremely difficult to enforce withholding?
00:24:38CodeShark:actually...if you were to keep track of the total amount of existing coins on the network, wouldn't that also be a way to track double-spends?
00:24:51CodeShark:or detect them, rather
00:25:04gmaxwell:CodeShark: I am disappoint at your lack of reading sometimes, it makes talking with you a lot less fun.
00:25:10CodeShark:what should I read?
00:26:11gmaxwell:What I said in the discussion here (and #bitcoin-dev!) if I say something you don't get please ask me to clarify. State commitments are sufficient to make the checks efficient (maybe necessary, not quite sure; e.g. commitments to a hashtree over the utxo set).
00:26:28gmaxwell:Then there are compact proofs for both double spending and spending a non-existing coin.
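The hashtree-over-the-UTXO-set idea can be sketched concretely. This is a minimal illustrative model (plain SHA-256, toy outpoints, not Bitcoin's actual commitment format, and all names here are made up): a block would commit to the Merkle root, and a membership proof is just the sibling hashes along one leaf-to-root path.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node if the level is odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root -- the 'hashtree fragment'."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], index % 2 == 1))  # (sibling, sibling-is-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sib, sib_is_left in proof:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

# toy UTXO set: four outpoints
utxos = [b"txid0:0", b"txid1:1", b"txid2:0", b"txid3:2"]
root = merkle_root(utxos)
assert verify(b"txid2:0", merkle_proof(utxos, 2), root)
```

The compact double-spend and nonexistent-coin proofs mentioned above ride on exactly this kind of structure: a short path proves an entry is (or is not) in the committed set without shipping the whole set.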
00:27:58CodeShark:the thing you seemed to be concerned about was withholding attacks
00:28:13CodeShark:everyone playing dumb
00:29:17CodeShark:and I, admittedly, hadn't clicked on your proofs link
00:29:45gmaxwell:it's okay, it's just that you seem to be reinventing the things I'd just mentioned! :)
00:31:20CodeShark:so then what's the downside to state commitments?
00:31:37nwilcox:miner overhead?
00:31:57CodeShark:presumably we need a scheme that does not require even miners to perform full validation
00:32:22CodeShark:so asymptotically, it would actually reduce miner overhead
00:32:41gmaxwell:CodeShark: it's a large overhead for verifying, because now you have to verify that the state updates are faithful, which needs a bunch more random IO.
00:33:08gmaxwell:"asymptotically" is a good way to talk yourself into a lala land of irrelevance though. :)
00:33:32CodeShark:well, even for finite systems that are growing exponentially :)
00:33:34dgenr8:i don't read either. what stops somebody from presenting a perfectly good UTXO proof from two years ago when it was spent an hour ago?
00:34:08gmaxwell:basically with the right commitment sets you can avoid having anyone store the state, but then you end up carrying around membership and update proofs for each input (hashtree fragments) and for our current utxo set size they end up being a couple kilobytes.
00:35:14gmaxwell:"so great, you've saved a $1 one-time disk space cost for a $50/mo bandwidth cost" :P (or what have you, random numbers -- the point I'm making is that asymptotics can be misleading)
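The "couple kilobytes per input" figure is easy to sanity-check. A rough back-of-envelope (the UTXO-set size below is an illustrative guess, not a measured number): a membership proof is one 32-byte hash per tree level, and the tree depth grows with the log of the set size.

```python
import math

def proof_bytes(n_utxos, hash_len=32):
    """Approximate size of one Merkle membership proof: one hash per level."""
    depth = math.ceil(math.log2(n_utxos))  # tree height = path length
    return depth * hash_len

# assume a UTXO set on the order of tens of millions of entries
print(proof_bytes(20_000_000))  # -> 800 bytes per input, before any overhead
```

With a few inputs per transaction, plus update proofs, that lands in the low-kilobytes range per transaction, which is the bandwidth-for-disk tradeoff being complained about.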
00:36:06CodeShark:I'm talking about a model that can survive exponential growth in usage (at least for a while)
00:36:36CodeShark:if the usage were to remain constant, then yes, it's silly
00:36:46dgenr8:CodeShark: with full-node security, or SPV?
00:37:01CodeShark:with something a lot better than SPV - but not necessarily deterministic
00:37:12CodeShark:let's say with negligible failure rate
00:37:48CodeShark:or with mitigated failure
00:38:19dgenr8:CodeShark: akin to SPV with arbitrarily many peers?
00:38:31gmaxwell:as I said, these are not new ideas; you are _years_ behind on this, please go actually crunch the numbers on what it looks like and you'll see the tradeoffs are not attractive at least anywhere within the realm of reasonable loads (e.g. supportable on today's hardware)
00:38:54nwilcox:"CodeShark> presumably we need a scheme that does not require even miners to perform full validation" Woah... so, relaxing global consensus?
00:39:07CodeShark:gmaxwell: forgive me for being behind - I have not seen the literature with these numbers and can't possibly follow all the forums and mailing lists and read everything back to the beginning of time
00:39:10gmaxwell:dgenr8: SPV security is not really improved by 'arbitrarily many peers'.
00:39:34CodeShark:so if you have some good literature with a summary it would be greatly appreciated
00:40:57dgenr8:gmaxwell: no? if an SPV node connects to the entire network, how does withholding work?
00:41:22nwilcox:+1 for literature summaries for people catching up (if you can afford the effort/time).
00:42:14gmaxwell:CodeShark: you're forgiven, there is no great one-stop shop to catch up in an instant; that's not what I'm barking about. But actually following the links I give in context would be good.
00:42:46gmaxwell:The key words in this space are "txo commitments" "stxo commitments" "utxo commitments" and "stateless mining".
00:42:53CodeShark:I usually do - I got interrupted by a phone call in this instance
00:43:07gmaxwell:dgenr8: withholding _what_?
00:43:31CodeShark:believe me, gmaxwell, there's nothing I'd rather be working on more than solving these exact issues...unfortunately I also have a bunch of other stuff to attend to :p
00:43:46CodeShark:so if you can help me catch up so I can usefully contribute it would be very much appreciated
00:44:35gmaxwell:Will do. (But I don't have the time to do so right this second, but indeed collecting pointers on this would be very useful; since it seems a few new people have shown up)
00:45:04dgenr8:gmaxwell: spentness
00:45:23CodeShark:I think a bunch of people are essentially trying to invent the same thing here...and there isn't the best amount of communication
00:46:04gmaxwell:CodeShark: there was a ton of communication on this, literally 100s of posts, mailing list entries, presentations, wiki articles, implementations.
00:46:32CodeShark:please point :)
00:46:38CodeShark:most of the stuff on these things is noise
00:46:42gmaxwell:As I mentioned at the top, enthusiasm has waned in part because of those two technical issues.
00:46:43CodeShark:and I just can't go through all of it
00:46:44phantomcircuit:Emcy, IPC is roughly irrelevant for bitcoin validation speed
00:47:02phantomcircuit:Emcy, i would be entirely surprised if intel is holding back on raw processing speed
00:47:21Emcy:what about moar coars
00:47:23phantomcircuit:probably the only way to significantly increase validation speed would be ops specifically intended for it
00:47:49hulkhogan_:CodeShark, grep ;p
00:47:51gmaxwell:dgenr8: what if no one at all knows? CodeShark's premise was that there are no full nodes at all, not even miners. So what if you create an invalid block and then just plead ignorance?
00:48:15Emcy:they prob would put EC hardware in the cpu eventually but very likely not the curve bitcoin uses
00:48:21gmaxwell:"oh, you want to test this part of it.. oh nope, don't know anything about that part"
00:48:28Emcy:EC is getting some widespread use now i think
00:49:18Emcy:though something bothers me about a CPU having all this hardware for very specific functions
00:49:24gmaxwell:Emcy: intel IPC has actually increased tremendously in recent parts.. count the actual cores.
00:50:20gmaxwell:they now ship 18 core parts...
00:52:25dgenr8:gmaxwell: the interesting question is how to serve a couple billion SPV nodes. for that, the network should not need to be all full nodes. miners though, they need full node security afaict
00:52:50CodeShark:the only thing useful about the SPV concept is the hash tree - everything else I'd be totally happy scrapping :p
00:53:15gmaxwell:dgenr8: then you're talking about something different from codeshark.
00:53:32Emcy:18 cores damn
00:53:38Emcy:not consumer though
00:54:00gmaxwell:Emcy: right because consumers would have no clue what to do with it, unfortunately.
00:54:42gmaxwell:applications have not scaled well to multicore in general, and consumers have (as we've noticed) now been conditioned to believe that fully using a core for more than a fraction of a second means the software is broken. :(
00:54:57Emcy:how do we make sure bitcoin wont end up only runnable on server market parts
00:55:46jgarzik:throttle adoption
00:56:01gmaxwell:Emcy: non-server market seems mostly to be optimizing for power usage and size right now.
00:56:06leakypat:jgarzik: +1
00:56:18c0rw1n:c0rw1n is now known as c0rw|zZz
00:56:40gmaxwell:disagree that throttle adoption is necessary or sufficient.
00:57:45gmaxwell:A single person could drive bitcoin to require an 18-core server cpu to keep up, absent limits; behold the power of "while true". Meanwhile the overwhelming majority of bitcoin-denominated transactions happen without perturbing the blockchain (inside exchanges).
00:58:08leakypat:To clarify, I mean adoption of blockchain written transactions
00:58:38gmaxwell:leakypat: yea, with that context I agree more!
00:59:55dgenr8:a UTXO set is a performance enhancement for a full node. since "everyone playing dumb" is a problem, it's probably just as well to skip the whole exercise of trying to prove unspentness...
01:00:00dgenr8:...and just directly distribute the exercise of proving spentness. which just requires a tx index, not a summary database.
01:00:29gmaxwell:gmaxwell has left #bitcoin-wizards
01:01:01dgenr8:...and allows individual nodes to have less than the full blockchain. ie they are capable of proving spentness for a certain part of the space
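The spentness-proof idea above can be sketched in a few lines. This is an illustrative model of dgenr8's suggestion (all names are assumptions): a node keeps an index from outpoint to spending transaction, and a spentness query is answered by handing back the spending transaction itself as the witness. Nodes could shard the outpoint space and still answer for their slice.

```python
# outpoint (txid, vout) -> raw spending transaction
spend_index = {}

def record_block(txs):
    """Index every outpoint consumed by the transactions in a block.
    txs is a list of (raw_tx, inputs) pairs."""
    for raw_tx, inputs in txs:
        for outpoint in inputs:
            spend_index[outpoint] = raw_tx

def prove_spent(outpoint):
    """Return the spending tx as a self-verifying witness, or None.
    NOTE: absence of a spentness proof is NOT a proof of unspentness --
    this node may simply not have indexed the relevant region."""
    return spend_index.get(outpoint)

record_block([(b"raw-tx-B", [("txid-A", 0)])])
assert prove_spent(("txid-A", 0)) == b"raw-tx-B"
assert prove_spent(("txid-A", 1)) is None  # unknown, not "unspent"
```

This needs only a transaction index, not a summarized UTXO database, which is exactly the asymmetry discussed below: a spent proof is one cheap witness, while an unspent proof has to rule out every transaction in the chain.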
01:01:45CodeShark:what about an incentives model?
01:01:59CodeShark:why would they be interested in proving it?
01:02:02Emcy:it seems the only consumer hardware still going for raw power is gfx cards
01:02:12Emcy:so bitcoin has to better utilise those right
01:02:39dgenr8:CodeShark: well when resource requirements were lower, I hear there were 350K nodes
01:03:22Emcy:amd is pretty good at the gpu compute stuff right
01:04:00CodeShark:dgenr8: but generally, yes - incentives for proofs of spentness seems like a promising approach
01:05:17phantomcircuit:Emcy, even gpu's aren't designed for continuous load
01:05:40Emcy:well bitcoin isnt continuous
01:10:36dgenr8:CodeShark: suppose you had both. if you ask about a txout, and you get both an "unspent proof" and a "spent proof", guess which one wins ;)
01:11:07CodeShark:unspent proof seems a lot harder
01:14:48dgenr8:lack of an unspent proof is a lousy substitute for a spent proof. lack of a spent proof gets more interesting the larger the set who fail to provide it
01:16:29CodeShark:the only person who might perhaps have the proper incentives to provide unspent proofs are the spenders
01:16:35CodeShark:*is the spender
01:17:21CodeShark:for anyone else it
01:17:24CodeShark:it's too expensive
01:17:25CodeShark:not worth it
01:17:49CodeShark:however spent proofs are relatively cheap
01:20:19CodeShark:presumably if you got a spent and an unspent proof something broke :p
01:20:45dgenr8:CodeShark: unspent is fleeting. spent is forever (modulo reorgs)
01:22:17CodeShark:right - the unspent proof would only say "unspent within this particular chain"
01:22:32CodeShark:once the chain grows or changes, all bets are off
01:23:35CodeShark:so if a spender were to want to provide a proof of unspent, they'd have to continue updating the proof as the chain grew
01:24:48dgenr8:i'm not sure that's harder than keeping a txindex, but the logical properties of an stxo query seem superior
01:25:29CodeShark:well, it's also difficult because the witness must be compressed
01:25:39CodeShark:it must use some SNARG or SNARK of some sort
01:26:15CodeShark:whereas proof-of-spent can be easily accomplished with a single witness...the transaction that spends it
01:26:41dgenr8:yes. the other stuff is reading-list material for me ;)
01:27:07phantomcircuit:Emcy, IBD is continuous load
01:27:29bramc:There seems to be a law of commenting on bitcoin online: No matter what you say, somebody will accuse you of being a socialist
01:28:44CodeShark:hmm, not sure I've gotten that accusation from bitcoin specifically, bramc
01:28:48zooko:bramc: I'm just saving up my favorite Bram Cohen quote about Bitcoin for the right moment.
01:28:49Emcy:yes but not forever
01:29:14Emcy:cards are coming with stock waterloops now anyway
01:31:19bramc:zooko, Huh?
01:31:32Emcy:>mfw "If there is hope, it lies with the gamers"
01:33:07Emcy:socialism is ok
01:33:29Emcy:it saved my nan's life a few months back after a series of 4 minor strokes
01:34:55bramc:Emcy, My own economic outlook isn't so simplistic, I'm just confuzzled that people are accusing me of being a socialist for wanting to have prices set by market demand instead of subsidizing them to be zero
01:35:05bramc:supply and demand I mean
01:35:05zooko:bramc: I was just reminded of my favorite quote by Bram Cohen about how stupid Bitcoin is.
01:35:21zooko:bramc: I was teasingly claiming that I'm waiting for the worst possible moment to quote you on it.
01:35:25bramc:zooko, Using bitcoin as a 'store of value' continues to be ridiculous
01:35:27zooko:* zooko cackles gleefully
01:36:03Emcy:subsidising the fees?
01:36:04bramc:zooko, People are quoting me on it to attack me now, I don't care. I will happily remind people about the track record of bitcoin to date in terms of what's it's done to people who have bought into it
01:37:11Emcy:what you described seems the opposite of socialism
01:37:16bramc:Emcy, subsidizing the transaction costs I mean. I just finished driving for two hours and am not wording good
01:37:30bramc:Emcy, Hence my being confuzzled!
01:37:54Emcy:except if it counts as corporate socialism to the miners, which is the only kind of socialism people seem to be blind to heh
01:43:34zooko:bramc: wait, they're already quoting you from when you said something like "Bitcoin is just digital goldbuggery, which is even more idiotic than normal goldbuggery" ? Damn, there goes my chance to embarrass you in public.
01:44:07bramc:zooko, It's goldbuggism not goldbuggery, and yes they are
01:44:13amiller:rofl goldbuggery
01:44:47bramc:And I've always said, and continue to say, that getting excited about bitcoin because the value is going up is stupid, and saying it's a failure because the value is going down only marginally less so.
01:45:13Emcy:it seems it is you who has provided a quote for the ages zooko
02:56:19bramc:Aaand Gavin's proposed BIP uses nlocktime instead of block height for rollout dates. The whole subject is horribly depressing.
03:36:45CodeShark:bramc: you mean block_timestamp, not nlocktime (I hope) :)
03:37:56CodeShark:although block_timestamp is almost as bad :p
03:38:22jgarzik:indeed, it is block_timestamp
03:38:33jgarzik:median time has been suggested on the list
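The "median time" suggestion can be sketched as follows: take the median of the last 11 block timestamps, the same median-of-past-11 rule Bitcoin already uses to bound block timestamps. Unlike a single block's timestamp, the median moves monotonically forward and is much harder for one miner to skew. A minimal sketch (toy timestamps):

```python
def median_time_past(timestamps, span=11):
    """Median of the most recent `span` block timestamps."""
    recent = sorted(timestamps[-span:])
    return recent[len(recent) // 2]

# individual block timestamps can jump backwards block-to-block...
times = [1000, 1600, 1300, 1900, 1700, 2200, 2000, 2600, 2300, 2900, 2500]
# ...but the median smooths that out
print(median_time_past(times))  # -> 2000
```

An activation rule keyed to median-time-past inherits that smoothness, which is part of why it was suggested over the raw header timestamp.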
03:40:27bramc:If you want to be more precise about future time, you can have block_timestamp be used at some point to determine an amount of extra height before things kick in
03:40:43bramc:That would be a few minutes noisy but not have weird reorg problems
03:41:14CodeShark:I don't quite get the rationale for not using block_height either - I mean, I can understand that block height doesn't accurately predict a moment in time...but the rationale given is that the block height isn't available in the block header.
03:41:30CodeShark:two things: you still need to calculate block height to validate difficulty
03:41:49CodeShark:and...if we're going to hard-fork, might as well move the block height into the header rather than using that ridiculous coinbase hack
03:43:00CodeShark:and let's add an extra arbitrary 256-bit field to the header to allow for future extensions
03:43:18CodeShark:oh, right...that breaks miners
03:44:18CodeShark:also, if we can't hack around the coinbase data issue, we could have a way of requesting only the merkle tree for the coinbase transaction
03:47:59CodeShark:still an utterly rube-goldbergish hack...but at least it can be easily encapsulated so we never have to look at it in our code again
03:49:40CodeShark:I can already imagine miners creating coinbase transactions with a zillion outputs just to piss us off :p
03:54:08CodeShark:so someone please explain to me the use case for "not needing to know the block height when doing validation" ?
04:07:40CodeShark:it is quite depressing, bramc...
04:10:24CodeShark:ah...I see Gavin's real reason in the ML
04:10:37CodeShark:it was easier for him to hack up something that only requires the timestamp :p
04:10:58CodeShark:grabbing the block height was apparently too much work
04:11:08phantomcircuit:CodeShark, the block height is not suitable because it can land well before or after the expected day, depending on network hashrate growth
04:11:17phantomcircuit:ie you calculate the expected height in 6 months
04:11:25phantomcircuit:price goes up and the hashrate goes up
04:11:29CodeShark:it's not particularly accurate - but it's very well-defined
04:11:40CodeShark:same thing goes for block reward halving
04:11:42phantomcircuit:and the difficulty retarget lag causes that block height to be reached x% earlier
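The retarget-lag point is simple arithmetic. In this toy model (the growth rate and target height are made up for illustration), difficulty always reflects the *previous* period's hashrate, so with steady growth every 2016-block period finishes early and a height scheduled "6 months out" arrives sooner:

```python
def days_to_height(blocks, growth_per_period=0.05, period=2016,
                   target_days=14.0):
    """Approximate days to mine `blocks` blocks when hashrate grows by
    `growth_per_period` each retarget: difficulty lags one period behind,
    so each period completes in roughly target/(1+g) days."""
    periods = blocks / period
    return periods * target_days / (1 + growth_per_period)

expected = days_to_height(26_000, growth_per_period=0.0)   # flat hashrate
actual = days_to_height(26_000, growth_per_period=0.05)    # 5% growth/period
print(round(expected), round(actual))  # -> 181 172
```

Roughly nine days early over six months in this toy run, and the error compounds with faster growth, which is the objection to pinning a rollout to a precomputed height.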
04:12:12phantomcircuit:CodeShark, using the timestamp is reasonable, the way it's being used is comically absurd
04:15:22bramc:It could be set to the first block with a timestamp greater than X plus a certain amount of height
04:15:45phantomcircuit:bramc, and we have a winner!
04:15:49bramc:That would be fairly easy to hack up, and not have so many headaches.
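bramc's rule is easy to state in code. A hypothetical sketch (FLAG_TIME and EXTRA_HEIGHT are made-up constants, not from any BIP): the timestamp only selects an anchor block, and the actual trigger is pure height past that anchor, so the switchover is a few minutes noisy but doesn't flap during small reorgs around the flag time.

```python
FLAG_TIME = 1_500_000_000    # X: the agreed switchover timestamp (illustrative)
EXTRA_HEIGHT = 1000          # grace period in blocks after the anchor

def activation_height(chain):
    """chain: list of (height, timestamp) in chain order.
    Activation fires EXTRA_HEIGHT blocks after the first block whose
    timestamp exceeds FLAG_TIME; returns None if no block qualifies yet."""
    for height, ts in chain:
        if ts > FLAG_TIME:
            return height + EXTRA_HEIGHT
    return None

# toy chain: one block every 600 seconds, starting just before FLAG_TIME
chain = [(h, 1_499_999_000 + 600 * h) for h in range(10)]
print(activation_height(chain))  # -> 1002 (anchor at height 2, plus 1000)
```

By the time the grace period elapses, the anchor block is buried far deeper than any plausible reorg, which is what makes the final trigger stable.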
04:16:48phantomcircuit:bramc, it's really silly either way though
04:17:07phantomcircuit:if you actually have a majority consensus you merely need to switch the nodes' version of reality permanently
04:17:13phantomcircuit:all the non believers be damned
04:17:30CodeShark:the question is whether the reality really switched
04:18:09CodeShark:if you can
04:18:54bramc:Might as well move around some utxos while you're at it. Get rid of all the really old unspent mining rewards, maybe eliminate the ones known to have been seized by mt. gox.
04:19:24phantomcircuit:CodeShark, that's the great joke
04:19:32phantomcircuit:what miners vote for is meaningless
04:19:49phantomcircuit:using that as the switch over criteria shows a fundamental failure to understand the matter
04:20:15CodeShark:we lack any other objective metrics besides hashing power that can't easily be gamed
04:20:26justanotherusr:I think the idea is to assume everyone already switched over manually..
04:20:34CodeShark:that's the unfortunate truth
04:20:38justanotherusr:the miner vote is just to get miners to agree with each other before they risk wasting money
04:20:43phantomcircuit:CodeShark, the only metric that matters is user adoption
04:20:50phantomcircuit:CodeShark, which cannot be reliably measured
04:20:56phantomcircuit:there is no safe way to hard fork
04:20:56CodeShark:phantomcircuit: precisely
04:21:21bramc:phantomcircuit, Miner buy-in for compatible extensions works fine
04:22:07phantomcircuit:bramc, yes for a soft fork it's fine
04:22:32phantomcircuit:bramc, since it's essentially the miners colluding to censor transactions which violate the new rules
04:23:07bramc:phantomcircuit, That's part of it, but it's also them agreeing that they'll all understand the extensions and deciding when to start issuing them
04:25:14bramc:technically a soft fork where the miners all agreed to censor certain utxos and make them unspendable would also work
04:25:20bramc:Although that... would be a bad idea
04:25:34phantomcircuit:bramc, shh dont tell them
04:25:34CodeShark:for a hardfork, if the vast majority of users are still on the old fork it just means miners waste their hashing power securing nothing
04:25:46phantomcircuit:CodeShark, correct
04:26:36bramc:CodeShark, The result of a hardfork doesn't tend towards one side or the other winning; it tends towards mining hashpower going to the two sides in proportion to what their mining rewards are trading at on exchanges
04:27:16phantomcircuit:bramc, thus it being almost certainly a disaster unless you can achieve near universal consensus first
04:27:21bramc:Just like they're two completely unrelated altcoins, which basically they are
04:27:27CodeShark:bramc: yes, but I would think that's highly correlated with the proportion of actual users
04:27:34CodeShark:the price at exchanges, that is
04:27:37bramc:phantomcircuit, Yes exactly
04:27:53bramc:CodeShark, There aren't very many exchanges, they're the ones who have the greater say
04:28:18CodeShark:the exchanges will go with what they think is in their own economic self interest
04:28:34CodeShark:higher volume of trading would generally be seen as better for their bottom line
04:28:47CodeShark:however...I don't think this forked ledger is a stable situation :p
04:28:56CodeShark:I think the ultimate outcome is both lose
04:29:14CodeShark:either one wins overwhelmingly...or both lose
04:31:01CodeShark:and the longer the fork carries on without an overwhelming victor, the more likely they'll both lose
04:31:29phantomcircuit:CodeShark, except a bunch of them are planning to do centralized offchain transactions also
04:31:36phantomcircuit:so who's to say?
04:31:50CodeShark:I don't quite follow
04:32:14CodeShark:was that meant sarcastically? :)
04:32:25phantomcircuit:CodeShark, lots of them have -- as part of their business model -- operating trusted off chain transaction systems
04:32:38phantomcircuit:which mucks up the question about rational actors
04:32:58CodeShark:sure - that's fine for internal trades. the issue only applies to on-chain deposits and withdrawals
04:33:13CodeShark:but if it's easier for someone to get into and out of positions, they're more likely to want to trade
04:33:21jgarzik:a longstanding forked ledger is incredibly unlikely
04:33:36jgarzik:c.f. "50 BTC or die" folks :)
04:34:28phantomcircuit:jgarzik, at a 75% switch over it's practically guaranteed to happen for some period of time exceeding a week
04:34:46phantomcircuit:there's a fairly substantial amount of mining infrastructure that's entirely on autopilot
04:34:51phantomcircuit:nobody would even notice
04:36:12phantomcircuit:hell even if 1% stays
04:36:21phantomcircuit:that's 1 block/day
04:36:42jgarzik:c.f. P2SH :)
04:37:12CodeShark:and not only do both forks lose in this scenario - any potential future cryptocoin loses as well
04:37:14jgarzik:That's perfectly OK... miners are allowed to piss money away
04:37:17CodeShark:confidence in the whole idea suffers
04:37:58CodeShark:if a few miners are the only ones left behind I don't think anyone will shed a tear
04:38:40jgarzik:a few miners + MP
04:39:05jgarzik:hmmmmm. Now I like forking even more ;p
04:42:01phantomcircuit:jgarzik, soft fork experience is meaningless
04:42:17CodeShark:the march 11th fork is actually a great example of user adoption defeating hashing-power adoption...but only by human intervention
04:42:46CodeShark:and willing cooperation from said miners
04:43:51moa:bramc: " it tends to mining hashpower going to the two sides proportionately to what their mining rewards are trading at on exchanges"
04:44:03moa:this is exactly what happened in early days of namecoin
04:44:18moa:and violent oscillations between value and hashpower shifts
04:45:13CodeShark:are you talking about miners who mined both bitcoin and namecoin? not sure how this example fits in
04:46:10phantomcircuit:moa, and now namecoin is worthless
04:46:16phantomcircuit:so historical precedent is not good
04:46:22CodeShark:two merge-mined distinct ledgers is a whole lot better than one fork-mined ledger :p
04:46:25phantomcircuit:CodeShark, namecoin forked a bunch of times
04:46:39phantomcircuit:CodeShark, which resulted in different exchanges trading different forks
04:47:30CodeShark:that must have been a few weeks before I came into this space
04:49:14moa:CodeShark: this was before merge mining
04:50:23CodeShark:merged mining is perhaps namecoin's most significant legacy. unfortunately, namecoin's merged mining implementation is just about the worst possible way it can be done :p
04:50:30moa:depending on points in the cycle it was more profitable to mine namecoin or bitcoin, with same hashpower
04:52:01CodeShark:I missed out on all that mining fun - I never got to do any bitcoin mining
04:52:08moa:so hashpower was jumping back and forth between ... and exchange rate and difficulty oscillating wildly
04:52:38moa:which was partly the incentive to implement merge mining
04:52:52CodeShark:when I came into this space, GPU mining was probably at its peak
04:52:54moa:hurriedly, admittedly ... but hey
04:53:05moa:it works right
04:53:20CodeShark:it actually explains a bunch, moa
04:55:48moa:lesson being you do NOT want the sha(256) hash power to split and start competing against each other
04:55:58moa:it's like a dragon eating its own tail
04:58:01CodeShark:switching between different coins is not nearly as bad as switching between different ledgers of the same coin :p
04:59:09CodeShark:during the march 11th fork, the recommended policy for merchants and exchanges was to stop all transactions until the fork is resolved
05:01:37phantomcircuit:CodeShark, which is the only rational behavior during a hard fork
05:01:51phantomcircuit:which means there's no rational way to decide which side to goto as a user/miner
05:01:52phantomcircuit:and tada
05:01:55phantomcircuit:systemic collapse
05:03:02CodeShark:there could be a rational reason to back one fork rather than the other if you strongly believe it will ultimately prevail
05:03:27CodeShark:as a miner, that is
05:03:36CodeShark:as a user, I think the rational thing to do is wait :)
05:03:52CodeShark:barring criminal intent, of course
05:04:56CodeShark:as a miner, the only real downside to backing one side and losing is the cost of electricity and a few missed blocks
05:06:08CodeShark:so one could even argue that it's more rational to mine on either fork (even without having any idea which will prevail) than not mining at all
05:08:34phantomcircuit:CodeShark, the rational decision for miners is to turn off their equipment
05:08:36phantomcircuit:fun right?
05:09:45CodeShark:so the conclusion is 75% of hashing power is an extremely poor metric to use here :p
05:10:14CodeShark:but any other metric likely requires some level of human intervention
05:11:38jgarzik:miners follow users after a hard fork
05:12:06jgarzik:except the ones on autopilot that get ignored
05:13:03CodeShark:jgarzik: that only happened on march 11th because of pressure applied on a few big pool operators
05:13:17CodeShark:had that not been done, they might not have noticed for a while :)
05:14:14CodeShark:autopilot definitely favored the minority fork in that situation
05:15:38phantomcircuit:CodeShark, i believe it is actually the worst possible
05:15:49phantomcircuit:it's high enough that the uninformed will believe it's meaningful
05:15:55phantomcircuit:while also being a useless metric
05:16:14phantomcircuit:AND basically guarantees a true network split
05:17:04CodeShark:can't we conduct a poll somehow and decide based on that? (yes, I know polls can be sybilled...but let's not be such geeks for a moment and think practically)
05:17:12phantomcircuit:jgarzik, you really have no way of knowing that, the march fork had an entirely clear course of action once it became clear what was happening
05:17:25phantomcircuit:CodeShark, no because sybil
05:17:39phantomcircuit:CodeShark, the polls people have put up are actually being attacked in this manner
05:18:10CodeShark:we don't just place an online questionnaire up for anonymous responses - we'd have to have the process monitored by actual people...and yes, it can get political
05:21:03jgarzik:phantomcircuit, outside of hyperventilating wizards, the near unanimous majority I hear wants the network to continue to scale. If the network hard forks to 2MB in 6 months, the network will not fall over. Hard forks include big risks, but it is being blown way out of proportion -- on here & by Mike Hearn both. Mountains are being made out of molehills.
05:21:04jgarzik:As a result, when the block limit is higher and there is truly a danger to decentralization, the wizards will get ignored because they squawked too loudly, too early.
05:22:09jgarzik:On the conservative side there is a noted lack of proposals, which leads to difficulty in taking that side seriously. At least Adam has his thinking cap on.
05:22:25CodeShark:I don't think we're only talking about this block size issue, jgarzik - I think this applies generally to having any sort of hard fork process
05:22:58CodeShark:it's not about people wanting the network to scale, ultimately...that's a red herring
05:23:03jgarzik:CodeShark, perhaps - the context is unavoidable. _This_ hard fork is not going to end the world.
05:23:19CodeShark:it's about one group of developers wanting control over the network vs. another
05:23:19jgarzik:and painting it thusly is disingenuous
05:23:26CodeShark:that's really what it will come down to
05:23:33CodeShark:let's not be naive here
05:23:55phantomcircuit:jgarzik, i've heard things from "what the fuck" to "i want bigger blocks without a hard fork"
05:24:09jgarzik:leave the cloister
05:24:14jgarzik:and get out into the world
05:24:18phantomcircuit:jgarzik, i've yet to speak to anybody who both wanted larger blocks and understood that they required a hard fork
05:24:46phantomcircuit:"your sample size is small and you should feel bad!"
05:26:01jgarzik:I've literally been flying all over the world taking samples ;p
05:26:52CodeShark:also, the block size increase push has a lot more to do with avoiding fee pressures than scaling
05:27:06CodeShark:let's also not be naive about that one :)
05:28:00leakypat:I couldn't find anyone at my meet up who didn't want to raise the block limit, some wanted it removed with no limit
05:28:07phantomcircuit:jgarzik, i suspect your samples are strongly biased
05:28:09jgarzik:CodeShark, yes - and they go hand in hand. To review, the _years long_ policy has been avoiding fee pressure. That is the market expectation. It is a major - and conscious - market shift to change that.
05:28:21jgarzik:That is not a judgement of good/bad, just a statement of fact.
05:28:22phantomcircuit:either way
05:28:29phantomcircuit:nobody will be happy when shit explodes
05:28:30jgarzik:Maybe you want fee pressure, maybe you don't.
05:28:37jgarzik:either way, it is a delta
05:28:47phantomcircuit:note: i personally will divest entirely before the fork date...
05:28:49jgarzik:to argue to _begin_ fee pressure is a market change
05:28:56leakypat:Most people I talk to aren't aware that fees are subsidized
05:28:56jgarzik:and an economic policy change
05:29:11moa:just a little fee pressure then?
05:29:24bramc:jgarzik, 'Leave things be' as a proposal is hard to take seriously?
05:29:29Luke-Jr:jgarzik: we have had fee pressure before
05:30:21jgarzik:bramc, No semantics will get around the fact that it is a change in economic policy to introduce consistent fee pressure.
05:30:23jgarzik:Luke-Jr, not consistently, http://hashingit.com/analysis/39-the-myth-of-the-megabyte-bitcoin-block
05:30:29bramc:jgarzik, arguably the market expectation has been that the rules which everyone has bought into by participating in the system will continue to be followed
05:30:31Luke-Jr:jgarzik: also, it's becoming more and more apparent that we will have fee pressure no matter what very soon: "stress test" spammers are going to be filling blocks as much as they can.
05:31:01Luke-Jr:jgarzik: I'm not sure what the link is meant to suggest.
05:31:11bramc:jgarzik, Is a balloon at the end of a mortgage schedule a sudden change in policy?
05:31:13jgarzik:bramc, If you want to argue for economic change, that's fine
05:31:21jgarzik:bramc, but be honest and admit that it is different from current policy
05:31:23Luke-Jr:jgarzik: we haven't consistently had no-fee-pressure either
05:32:00jgarzik:Luke-Jr, on average yes we have. each time the situation arises, Hearn lobbies miners to increase their block size soft limit, and/or Bitcoin Core increases default miner soft limit.
05:32:03jgarzik:the history is quite clear.
05:32:07bramc:jgarzik, I am arguing for economic change, but it's also coming right on schedule, as everyone paying attention has been aware of for years
05:32:13jgarzik:the numbers are quite clear.
05:32:57jgarzik:bursts of fee pressure are inevitable and natural. the long run average is low pressure / subsidized however.
05:33:30moa:full nodes could be incentivised to pay miners for smaller blocks I suppose
05:33:33dgenr8:people keep saying that word "subsidized". i dont think it means what they think it means
05:33:58gwillen:dgenr8: transactions are secured by miners, but miners are paid by inflation right now, not by transactions.
05:34:03gwillen:So the security of transactions is subsidized.
05:34:38bramc:gwillen, Also subsidized by full nodes
05:35:23dgenr8:that makes all kinds of assumptions about the future economic value of the block reward
05:35:36moa:hearn is a good lobbyist, it has to be said ... the bitcoin foundation could have used him
05:35:41bramc:dgenr8, Some members of the ecosystem are taking on significant costs to avoid those costs hitting others. 'subsidized' is a close enough word.
05:36:19dgenr8:the exhg rate since the last halving pays for the next 3 halvings.
05:36:24bramc:the bitcoin foundation seems to be a collection of the sketchiest people you'll ever meet
05:36:30dgenr8:so lets not get too up in arms about it
05:36:48jgarzik:moa: hehehehe I dunno I think hearn isn't so great as a lobbyist, he's angering everyone with this contentious fork stuff
05:36:50bramc:dgenr8, Nobody is suggesting getting rid of the halving!
05:36:57jgarzik:maybe he is a lobbyist for keeping it at 1MB :)
05:37:03dgenr8:read what i wrote again
05:37:23dgenr8:exchange rate is up 10x since 2012 halving
05:37:51dgenr8:gavin is right that this is the number miners should care about
05:38:15CodeShark:there seem to be two main interests behind the fee pressure avoidance policy: 1) bitcoin maximalists who think if we only were able to convince more people to use bitcoin the network would magically be able to suddenly support hundreds of millions of new users. 2) lazy developers who don't want to try to figure out a good solution to fee bidding
05:38:44bramc:I like fee bidding as a subject. I wish we were discussing that.
05:38:44jgarzik:CodeShark, now who's being naive ;p
05:39:20CodeShark:I said "seem" :p
05:39:37gwillen:bramc: I was interested to see that fee estimation is more widely deployed than I realized
05:39:50bramc:gwillen, Where? How?
05:39:56gwillen:that plus safe-RBF (or CPFP) sort of gives you fee bidding
05:40:04gwillen:bramc: let me see if I can find where I read that
05:40:32bramc:safe-rbf is something but it sucks. It directly reduces the potential throughput of the system as a whole, driving up prices even more
05:40:45jgarzik:CodeShark, software & market are not prepared for consistent fee pressure. c.f. economic policy change.
05:41:03jgarzik:"Let's change economic policies without preparing the ecosystem!" is not a responsible position.
05:41:03gwillen:bramc: https://gist.github.com/petertodd/8e87c782bdf342ef18fb
05:41:10jgarzik:sorry bramc
05:41:14gwillen:bramc: "As of v0.10.0 Bitcoin Core estimates fees for you based on the supply and demand observed on the network. By default it tries to pay a sufficiently large fee to get into the next block, so as demand increases it pays higher fees to compensate."
05:41:22gwillen:jgarzik: it's clearly a chicken-and-egg problem
05:41:31gwillen:jgarzik: nobody puts in engineering effort to fix problems that don't exist yet
05:41:35moa:maybe we could have full nodes pay into a pool for assurance contracts that pays miners automatically when they produce smaller blocks ... and run it on lighthouse?
05:41:45bramc:gwillen, *cringe* there are some control theory problems with that
05:41:56gwillen:bramc: oh sure, but it's a first attempt at least
05:42:01gwillen:it's better than _not_ doing it
05:42:19gwillen:it at least gives a better starting point for RBF than just guessing blindly
05:42:40gwillen:you're right that a real auction would be better economically, but I'm not even sure how you'd begin to do that
05:42:53CodeShark:it clearly is chicken-and-egg...we need to push the envelope...and can't expect things to always be economically smooth
05:43:10bramc:jgarzik, I very much want to discuss economic preparation, but the conversation is all getting derailed by talk about a hard fork and rbf not being supported and things which are generally sabotaging forward progress :-P
05:43:29CodeShark:we need to be faced with the problem to motivate us to find a solution - and it's arguably easier to solve it now rather than later...while blocks are still mostly subsidized by reward
05:43:41bramc:I've also talked with the lightning network guys and am trying to help with that as I can
05:44:10gwillen:jgarzik: do you follow rusty's blog at all? http://rusty.ozlabs.org/
05:44:38gwillen:jgarzik: he has good posts recently about (1) what gets crowded out if blocks fill, and (2) what happens to latency if blocks grow. (Both simulations based on guesses, obviously.)
05:45:50bramc:gwillen, Miner algorithms for what to accept are *mostly* straightforward, it's the clients deciding on fees which is hard.
05:46:25gwillen:bramc: agree on that, yes
05:46:55bramc:Come to think of it, a lot of people not cringing about the suggested fee estimation techniques is because they don't have much experience with control theory. I have, uh, a lot more experience than I ever cared to with control theory.
05:48:12gwillen:When you talk about control theory, is the issue that you can't reason adequately about what fees are required based on the data you have? Or that a bunch of clients trying to naively do so at the same time will interact badly with each other?
05:48:19gwillen:I.e. fees will oscillate wildly or something?
05:48:51moa:bramc ... multivariate problem requiring LQR, perhaps with LQE thrown in?
05:49:07gwillen:Because I could imagine both of these being an issue although it seems like both of them _could_ be fine in practice.
05:49:36bramc:gwillen, oscillation is one example of a problem. The issue is that if you have some algorithm for setting prices, and everybody's using it, then prices will be set based on what the previous prices were... which were also set by algorithm
05:50:03gwillen:That's not inherently bad... but I agree that it could be
05:50:45bramc:moa, unfortunately each participant only does things very sporadically, so those kinds of information-heavy approaches don't even apply
05:51:30bramc:I believe that no matter what your general approach to price setting is, you always want to make an attempt to lowball early and only raise the price if it fails
05:51:46bramc:Otherwise prices have a tendency to get high and get stuck there for no particular reason
05:53:19gwillen:well, this particular problem doesn't seem too likely to get stuck there
05:53:42gwillen:depends on how the estimator works I guess, but the marginal price to get into a block is just above the minimum price of any transaction in that block
05:53:59gwillen:and if a block is literally full of transactions above a given price, it's hard to imagine the fair price being below that
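gwillen's heuristic here — the going rate sits just above the cheapest feerate that still made it into a full block — could be sketched roughly as follows. The `(fee, size)` block representation, the 95% fullness cutoff, and the median damping are illustrative assumptions, not anything specified in the discussion:

```python
def marginal_feerate(recent_blocks, max_block_size=1_000_000):
    """Estimate a feerate floor (satoshis/byte) from recent blocks.

    Each block is a list of (fee_satoshis, size_bytes) tuples.
    Only full (or nearly full) blocks carry pricing information:
    a half-empty block means anything above the relay minimum
    would have gotten in.
    """
    floors = []
    for txs in recent_blocks:
        used = sum(size for _, size in txs)
        if used < 0.95 * max_block_size:
            continue  # block wasn't full; no marginal-price signal
        floors.append(min(fee / size for fee, size in txs))
    if not floors:
        return None  # no fee pressure observed
    floors.sort()
    return floors[len(floors) // 2]  # median damps outlier blocks
```

As the discussion immediately notes, a miner stuffing its own blocks with fake high-fee transactions can poison exactly this signal.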
05:54:23CodeShark:gwillen, miners could fake them
05:54:29gwillen:mmmmmm, true
05:54:43bramc:gwillen, Yes, but if everybody assumes that they need to offer that same amount to get into the next block then it may happen, even if it's untrue
05:54:59gwillen:bramc: if everybody is _willing_ to offer that amount to get into the next block, it can't be untrue
05:55:22CodeShark:unfortunately, without listening to the network gossip it seems very difficult to accurately assess fees...
05:55:24gwillen:the only way the true price could be lower is if people are letting the fee estimator pay money it's not actually worth to them, because they're asleep at the switch
05:55:32bramc:gwillen, maybe it looks like it's true because the miners fill up the extra space with self-payments
05:55:36CodeShark:just looking at past blocks is not enough
05:55:40gwillen:yes, miner cheating could do it
05:56:06gwillen:although you'd need a minimum hashrate fraction or a cartel, to make it profitable
05:56:11gwillen:I don't know what that fraction would be
05:56:23gwillen:otherwise you'd be bleeding fake fees to other miners
05:56:45gwillen:(oh, that's a lie, nevermind.... of course you only mine your fake fees into your own blocks)
05:56:51bramc:Also it could be that the fees on the next block will be dramatically lower than the last one, so maybe offering the same amount is a bad idea because you're paying too much.
05:57:07CodeShark:in principle, if miners are checked by sufficient parties, recent blocks would tend to be a fairly good guide...but I think bramc's claim of the "stuck" phenomenon is clients that aren't smart enough to figure out when miners start cheating
05:57:08gwillen:right, although that can be counteracted by looking at multiple past blocks, and looking at the mempool
05:57:28gwillen:yeah, miner cheating is the only serious issue I'm seeing here, but it's pretty serious
05:57:50bramc:Looking at the mempool can help, although it's fraught with danger
05:58:16bramc:And it requires new logic to get that info, and whoever you're talking to might lie about it if you're on SPV...
05:58:24gwillen:the mempool can be full of lies but if you're smart you can't be fooled for long
05:58:28phantomcircuit:i doubt very much that there is a general solution to the problem that doesn't as its first step assume that miners are ordering transactions on a feerate priority basis
05:58:41gwillen:phantomcircuit: that's fine, they largely do...
05:58:49bramc:phantomcircuit, Yes we all agree on that
05:58:56CodeShark:and they are more likely to do so even more if we have a real fee market
05:59:27CodeShark:assuming perfect transparency, it's the economically rational strategy
06:00:35bramc:As a general rule, there will always be some tradeoff between time for transactions to complete and fee paid
06:00:48CodeShark:of course, off-chain mining contracts might screw up even that assumption, though :)
06:00:53bramc:At the extremes there will undoubtedly be daily and weekly cycles
06:02:03bramc:So if there's a situation where, for example, there's fee pressure during peak times but none at all at other times, which will most likely happen for a while, you have to make a choice about how much you care about speed vs. fee
06:02:22phantomcircuit:gwillen, in that case it's as simple as sorting the mempool using some f(feerate, age) priority basis
06:02:40phantomcircuit:(age being a proxy for whatever other policies miners might have in place)
06:02:58CodeShark:the communication complexity (especially for thin clients) is significant for mempool stuff
06:03:43phantomcircuit:CodeShark, there's no way around that
06:03:49phantomcircuit:thin clients need to have a mempool
06:04:18bramc:CodeShark, the easy thing to do is for thin clients to simply ask the going rate. This has an obvious and horrific attack...
06:04:37phantomcircuit:although maybe it's not unreasonable to outsource fee estimation
06:04:42phantomcircuit:(on a strict fee rate basis)
06:05:06zooko:second-price auction
06:05:18CodeShark:fee market makers :p
06:05:33bramc:phantomcircuit, There aren't many full nodes out there - it would be trivial for miners to run them just to lie to SPV clients about how big the fees are
06:05:38moa:thin clients having a mempool (or sample of) is an intriguing idea
06:06:02phantomcircuit:bramc, i was actually thinking an authenticated trusted source
06:06:06bramc:This is all making my 'conservative' approach sound like a good idea
06:06:10phantomcircuit:the alternative is for thin clients to receive all transactions
06:06:27bramc:zooko, Yes we've all agreed to assume that the miners follow the obvious algorithm
06:07:12bramc:zooko, In principle you could require that the fees charged to all transactions be the same but, well, that would allow miner stuffing, and Bitcoin Doesn't Work That Way
06:07:52zooko:I didn't mean that -- I meant some vague generalization/extension that maintains the Vickrey property of true preference revelation.
06:08:25zooko:Maybe when I wake up from this sleep I'll have an idea how that could actually work. Probably not. Goodnight!
06:08:47zooko:(P.S. there's a thing called Generalized Second-Price Auction, used for keyword ad sales...)
06:09:35bramc:My conservative idea, for those who haven't seen me talk about it already, is that clients don't do any of this mempool or historical lookup stuff at all. They start at a nominal fee, maybe the minimum allowed, add a random amount between 0 and 10%, and for each block where their transaction isn't accepted add another 10% compounding
06:11:01phantomcircuit:bramc, that's probably a good general solution if replace by fee is available
06:11:08phantomcircuit:(and no not the ffs version...)
06:11:25bramc:This has the advantage of being completely trustless and easy to implement. It has the disadvantages that it requires real rbf, and makes transactions take longer
06:12:57bramc:The extreme amount of noise in the capacity rate due to the stochastic nature of mining blocks doesn't help anything
06:13:32bramc:For that reason alone being willing to wait a bit longer will get you lower fees
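The conservative scheme bramc describes — nominal starting fee, a random 0-10% jitter so clients don't all bid identically, then 10% compounding per block without acceptance — is simple enough to state directly. This sketch assumes fees in satoshis and that real replace-by-fee is available to rebroadcast each raise:

```python
import random

def escalate_fee(min_fee, blocks_waited, rng=random):
    """bramc's trustless bidding sketch: start at the minimum
    allowed fee plus a random 0-10% jitter, then compound 10%
    for every block the transaction wasn't accepted. Each raise
    would go out as a replace-by-fee transaction.
    """
    start = min_fee * (1 + rng.uniform(0.0, 0.10))
    return start * (1.10 ** blocks_waited)
```

The tradeoff is exactly as stated in the log: no trust in peers or historical data, at the cost of slower confirmation for any transaction that starts below the going rate.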
06:14:13CodeShark:perhaps that issue will be remedied with some of the proposals out there for reducing block time variance
06:14:43bramc:CodeShark, reducing block time variance comes with its own problems, like increasing the number of orphan blocks
06:14:58CodeShark:not if you also use them for PoW
06:16:12bramc:It's probably best to assume that there will be fee/time tradeoffs as part of the new normal
06:16:17CodeShark:but obviously such a change is unlikely in the very near term on the Bitcoin network...
06:16:49leakypat:If you have a transaction in the memory pool with a high fee and then you get a number of blocks in quick succession that clear down the pool but miss your transaction , there is no way to lower your fee
06:17:08bramc:You can also do a hybrid approach, where you figure out an expected fee based on recent history or mempools or something, then lowball it and slowly raise
06:17:17CodeShark:I think the fee/time tradeoff is inherent - the only question is how to optimize this for particular use cases and automate it as much as possible
06:17:18leakypat:As miners would always mine the one with the highest fee
06:17:35bramc:leakypat, Yes yet another reason to start low
06:18:00CodeShark:right, the "stuck" phenomenon doesn't even require cheating miners - just a bunch of dumb clients :)
06:18:21bramc:If the fees go down right after your transaction goes through then, well, you should have been willing to raise your fee slower.
06:20:06bramc:mempool has the other problem that it could potentially get fairly washed out, so you might be looking at only a fraction of what will eventually go into the block
06:21:28CodeShark:being able to listen to the network gossip will almost certainly help you estimate more accurately more quickly
06:22:11CodeShark:so those devices that have access to this should certainly make use of that information if they can
06:23:07CodeShark:there's also the communication complexity/decentralization tradeoff
06:23:32bramc:The day/night cycle should be taken seriously. That's likely to be dominant at first
06:25:36CodeShark:I sort of like the model of fee market makers - where you can pay a fee to someone who guarantees inclusion within a certain number of blocks or you get a refund
06:26:14CodeShark:and they then work their complex algorithms using reliable broadband connections
06:26:40CodeShark:as long as there are enough of these out there
06:26:46CodeShark:if there's only one or two, it's dangerous :)
06:27:38CodeShark:it's a decentralized fee estimation model that still supports thin clients
06:28:45leakypat:Wallets could probably provide that as a service
06:29:10bramc:Another thing a wallet can do is use the last fee it successfully used as a starting point for the next one
06:30:40leakypat:From its pool of users (if a centralized wallet)
06:31:45leakypat:Possibly could abstract fees completely, and charge per month or something for confirmation within n blocks guaranteed
06:32:49antanst:antanst has left #bitcoin-wizards
06:33:23leakypat:* leakypat ponders if that could be done trustless and seamlessly
06:35:29leakypat:The wallet service that broadcasts the transaction can have its own pool of inputs to use for attaching inputs for fee escalation
06:36:08phantomcircuit: perhaps that issue will be remedied with some of the proposals out there for reducing block time variance
06:36:15phantomcircuit:im not aware of any such proposals
06:36:23phantomcircuit:indeed i dont believe they're possible...
06:36:29CodeShark:phantomcircuit: mostly the GHOST-like ideas
06:37:19CodeShark:amiller had such a proposal, too
06:38:24phantomcircuit:CodeShark, iirc amiller's proposal was to prevent pooling right?
06:38:32CodeShark:that's another potential benefit
06:38:33phantomcircuit:the problem there is that it... prevents pooling!
06:38:52CodeShark:well, ultimately what you want is for pooling to be incorporated into the protocol itself
06:39:08CodeShark:you can mine at lower difficulty and still have your work count towards the most difficult chain
06:39:33phantomcircuit:* phantomcircuit looks around for petertodd to say treechains
06:40:43amiller:phantomcircuit, no the point wasn't to prevent pooling
06:41:04amiller:phantomcircuit, the point of that proposal (which predates ghost by a long time!) was to remove the block time magic setting
06:44:28phantomcircuit:amiller, can you elaborate?
06:45:05amiller:im not sure this is the best motivation for it now, so i can tell you what i thought the point was
06:45:56amiller:the point was just to get rid of the hardcoded 10 minute block time and replace it with an automatically adjusting mechanism
06:46:18phantomcircuit:amiller, based on...?
06:46:30amiller:it should increase the target difficulty when the network is losing too much stale work
06:46:44amiller:it should lower the target difficulty as much as possible until it starts losing stale work
06:47:07amiller:the simple idea is to fix a desired 'target' for stale blocks
06:47:21amiller:the last time i asked i think it was 2%, maybe you should target 20% or 50%
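The retarget rule amiller is describing — use the observed stale-block rate, rather than a hardcoded 10-minute interval, as the feedback signal for difficulty — might look schematically like this. The 5% step size and the single-window measurement are invented for illustration; his proposal does not specify them:

```python
def retarget(difficulty, stale_blocks, main_blocks,
             target_stale=0.02, step=0.05):
    """Sketch of amiller's adaptive block-time idea: raise
    difficulty when the network is losing too much work to stale
    (orphan) blocks, lower it when there is headroom. The 2%
    default target matches the figure quoted in the discussion;
    the fixed 5% adjustment step is purely illustrative.
    """
    observed = stale_blocks / (stale_blocks + main_blocks)
    if observed > target_stale:
        return difficulty * (1 + step)   # too many stales: slow down
    return difficulty * (1 - step)       # headroom left: speed up
```

As the exchange that follows notes, the open problem is how to set the stale-rate target itself, since it isn't tied to any market mechanism.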
06:48:09phantomcircuit:amiller, that's neat but probably fails to account for miners estimating costs
06:48:21amiller:phantomcircuit, what do you mean
06:48:38amiller:one thing i didn't pursue but now makes sense given ghost
06:48:53amiller:is whether you should give some rewards to the stale blocks or punish them
06:49:20amiller:ghost gives them some discount factor, maybe they should be given full credit i dont know
06:50:33phantomcircuit:amiller, it's currently fairly easy for a miner to calculate their revenues for the month
06:50:50phantomcircuit:it seems like it would be difficult to implement something like this without changing that
06:51:05amiller:ok this shouldn't make that any worse (i think it would be a useful idea to *prevent* large miners from doing that anyway)
06:51:26amiller:(or to put it another way, i think it would be a good idea to tempt large miners to gamble on an even bigger reward but more uncertainty)
06:51:49phantomcircuit:yeah that's probably true
06:54:39amiller:but i dont have any good idea for how to set the target time
06:54:44amiller:er the target 'stale' blocks
06:54:57amiller:i dont know how to attach it to any market mechanism
06:55:37amiller:maybe there's a way to solve the blocksize problem this way
06:55:57amiller:sdlerner wanted to decrease the block time rather than tinker with blocksize
06:56:33amiller:so i dunno maybe there's some way to let the average txes in a block or txfees determine a target stale blocks rate, and then use that to set difficulty
06:57:35CodeShark:we cannot assume that the tx fees aren't coming from the miners themselves if the model gives them any incentive to do so
06:58:58CodeShark:also, variable block times has another potential danger...people need to understand that the security level is not proportional to the number of confirmations
06:59:32amiller:thats true but i wouldn't let such a concern about presentation to users narrow my options at this point
06:59:43CodeShark:the security level is given by how hard it is for someone to reverse the transaction - so this is what must be quantified
07:02:02CodeShark:of course, it's hard to measure this very accurately
07:07:43CodeShark:it largely depends on the hash power distribution
07:08:12CodeShark:but I guess we can make some simplifying assumptions
07:08:19CodeShark:like assuming that nobody controls more than x%
07:09:55CodeShark:actually, encouraging small miners to publish their work more frequently (at lower difficulty) would perhaps give us a better sense of hash power distribution
07:10:40CodeShark:and discouraging withholding attacks obviously would as well
08:05:13orwell.freenode.net:topic is: This channel is not about short-term Bitcoin development | http://bitcoin.ninja/ | This channel is logged. | For logs and more information, visit http://bitcoin.ninja
08:59:01maaku:today I learned that sha224(sha224(a) || sha224(b)) takes two rounds to calculate the outer hash
08:59:08maaku:epic fail NIST
09:03:01phantomcircuit:maaku, dat padding
09:25:35maaku:FIPS-202 has three bits of meaningless padding, because no reasons at all
09:25:50maaku:but at least with the sha-3 block sizes it doesn't result in the same inconvenience
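The arithmetic behind maaku's complaint: a SHA-224 digest is 28 bytes, so concatenating two of them gives a 56-byte message, and the Merkle-Damgård padding (a 0x80 marker plus a 64-bit length field) needs at least 9 more bytes — 65 in total, one byte over the 64-byte block, forcing a second compression round for the outer hash:

```python
import hashlib

# Two 28-byte SHA-224 digests concatenated: 56 bytes of message.
inner = hashlib.sha224(b"a").digest() + hashlib.sha224(b"b").digest()

msg_len = len(inner)                     # 56 bytes
block = hashlib.sha224().block_size      # 64 bytes (shared with SHA-256)
min_padding = 1 + 8                      # 0x80 marker + 64-bit length field
blocks_needed = -(-(msg_len + min_padding) // block)  # ceiling division

print(msg_len, block, blocks_needed)     # 56-byte input needs 2 blocks
```

Had the digest been even one byte shorter (or the block one byte longer), the whole Merkle-node hash would fit in a single compression round — hence "epic fail NIST".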
11:16:40Tiraspol_:Tiraspol_ is now known as Tiraspol
11:37:48MRL-Relay:[fluffypony] testing
11:37:53fluffypony:yay, works again
13:14:21stonecoldpat:what is mrl?
13:14:37fluffypony:stonecoldpat: Monero Research Lab, https://lab.getmonero.org
13:14:50stonecoldpat:ahh cool! :)
13:25:52c0rw|zZz:c0rw|zZz is now known as c0rw1n
13:30:26leakypat:petertodd: is there any hashing power on Testnet with the full RBF patch?
13:34:24ruby32:ruby32 has left #bitcoin-wizards
14:01:17gavinand1esen:Has anybody done any simulation or modeling or research into the interaction between the random nature of block-finding and users' time preference for having their transactions confirm sooner rather than later?
14:07:51instagibbs:the more you frown at your wallet, the more it bumps the fee
14:08:29dgenr8:gavinand1sen: as in, if poisson parameters were different, would users pay more or less for quick confirmation? i guess you'd have to look at altcoins?
14:10:05instagibbs:really depends on the use case, clearly.
14:10:32instagibbs:Wonder if Bitpay has done a user study for their flow
14:14:54gavinand1esen:dgenr8: No, I'm thinking of the interaction between the random nature of finding blocks, either a maximum block or memory pool size, and people's willingness to pay more to see their transactions confirm sooner rather than later.
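One piece of gavinand1esen's question is easy to quantify in isolation: because block arrival is (approximately) a Poisson process, the wait from an arbitrary moment until the next block is exponentially distributed, with a ~10-minute mean and a heavy tail. This toy Monte Carlo shows only that distribution; it does not model mempool backlogs, block size limits, or fee behavior:

```python
import random

def simulate_wait_times(n_samples=100_000, mean_interval=600.0, seed=1):
    """Sample exponential waits-to-next-block (seconds) and report
    the mean and 90th percentile. By the memorylessness of the
    exponential, a user broadcasting at a random moment faces the
    same distribution as the full inter-block time: ~10 minutes on
    average, with roughly 10% of waits exceeding 600*ln(10) ~ 23 min.
    """
    rng = random.Random(seed)
    waits = sorted(rng.expovariate(1.0 / mean_interval)
                   for _ in range(n_samples))
    mean = sum(waits) / n_samples
    p90 = waits[int(0.90 * n_samples)]
    return mean, p90
```

That long tail is presumably where time preference bites: a user who needs the *next* block, not just an average of ten minutes, has to outbid everyone else in the mempool at an unpredictable moment.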
14:15:19dgenr8:historically, highest fees - as measured in BTC - coincide with highest public interest and price runups. https://blockchain.info/charts/transaction-fees?timespan=all
14:16:00ThinThread:obv we should drastically raise txn fees then
14:16:29gavinand1esen:fees will automatically raise the next time we get a price spike
14:17:03gavinand1esen:(fees as measured in dollars or euros)
14:17:17dgenr8:gavinand1eses: yes but it's non-obvious that fees as measured in BTC have also spiked
14:17:21ThinThread:almost forgot what dollars were
14:18:55gavinand1esen:dgenr8: that IS interesting, but makes sense to me-- price spikes correspond with lots of new users, and more users == more demand for transactions == higher tx fees
14:19:20gavinand1esen:(well, MORE tx fees at least)
14:23:38dgenr8:gavinand1esen: more low-priority txes, less user sensitivity to fees
14:24:06dgenr8:gavinand1esen: fees paid as a % of value transferred could also explain it. not sure if anyone's doing that tho
15:37:41_biO__:_biO__ is now known as _biO_
15:53:26maaku:leakypat: there could be
15:53:48maaku:phantomcircuit: can we deploy full-RBF to our miner?
16:02:52c0rw1n:c0rw1n is now known as c0rw|away
18:21:24luny`:luny` is now known as luny
21:07:05leakypat:maaku phantomcircuit that would be cool, I'm going to look at prototyping something in the testnet version of my wallet (but no point if there is no one running the patch :)
21:17:22petertodd:leakypat: what wallet is yours?
21:17:53petertodd:leakypat: ah cool - I just set up a mainnet full-rbf dns seed btw for wallet authors, I'll set a testnet one too
21:19:00petertodd:my latest full-rbf tree has the dns seed support in it, rbf-seed.btc.petertodd.org
21:25:26petertodd:leakypat: one thing you should do, is write your replacement code so it's full-rbf compatible, as well as fss-rbf compatible. The way I did that in my rbf demos was I made a variable for the minimum allowed value of the change output, and the loop that adds new inputs to make the change sufficiently large would then either start at 0 for full-rbf, or at the previous value for fss-rbf
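The pattern petertodd describes — one replacement routine parameterized only by the floor on the change output — might be sketched like this. The coin-selection loop, the function name, and the satoshi amounts are all illustrative, not taken from his demos:

```python
def pick_extra_inputs(candidates, fee_increase, prev_change, full_rbf):
    """One fee-bump routine for both policies, following petertodd's
    pattern: the only difference is the minimum allowed change.
    Full-RBF may spend the old change down to zero; first-seen-safe
    RBF must not shrink any existing output, so the floor is the
    previous change value. Amounts are in satoshis.
    """
    change_floor = 0 if full_rbf else prev_change
    # The fee bump is funded first from spendable change headroom,
    # then from newly added inputs (largest-first, illustratively).
    shortfall = fee_increase - (prev_change - change_floor)
    chosen = []
    for value in sorted(candidates, reverse=True):
        if shortfall <= 0:
            break
        chosen.append(value)
        shortfall -= value
    if shortfall > 0:
        raise ValueError("insufficient funds for fee bump")
    new_change = prev_change - fee_increase + sum(chosen)
    return chosen, new_change
```

Under full-RBF a 3000-satoshi bump against 5000 satoshis of change needs no new inputs; under fss-RBF the same bump must be funded entirely by added inputs, with the change growing accordingly.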
21:28:29phantomcircuit:maaku, yes
21:28:40phantomcircuit:maaku, i'll get that working
21:38:38leakypat:petertodd: ok, I'll implement both as going to prod prob only feasible for fss (for the near future anyway)
21:40:23phantomcircuit:petertodd, which branch is the full rbf?
21:42:58roy:Am I right in thinking that not accepting non-final or soon-to-be-final transactions into mempool is a relatively recent change, or has that always been the case?
21:44:12petertodd:phantomcircuit: https://github.com/petertodd/bitcoin/tree/replace-by-fee-v0.10.2
21:44:16petertodd:leakypat: thanks!
21:44:50petertodd:leakypat: lemme know how that goes - if it's not as easy to implement both with the same code I'd be interested in knowing why
21:45:24petertodd:roy: that was my first big contribution to bitcoin actually - been true for 2.5 years now
21:47:22roy:petertodd: cool, thanks. Was trying to understand the history of the evolution of functionality around this area (prompted by BIP68)
21:50:56roy:Although this also reminds me that the bit in the developer guide (talking about nlocktime and sequence numbers) is at least one place where it glosses over the history in a patronising sort of "things used to be different, but don't worry about it" way that I'm 99% convinced is likely to frustrate any developer
21:51:53petertodd:roy: actually, that might be there because mike hearn tried to politicise it by pushing the old, broken, nSequence replacement scheme
21:52:06phantomcircuit:petertodd, testnet's majority hashrate is now rbf
21:52:12petertodd:phantomcircuit: haha, awesome
21:53:24roy:I might propose a pull request - it's in github, right? I don't want anything political - just a very brief history of the semantics of transaction replacement, rather than saying things used to be different
21:53:28phantomcircuit:petertodd, and when i say majority i mean approximately 100%
21:53:53petertodd:phantomcircuit: I'm going to have to acquire some equipment again to decentralize that situation :)
21:54:03phantomcircuit:petertodd, you will lose
21:54:15maaku:phantomcircuit cheats
21:54:20phantomcircuit:i am king of testnet!
21:54:23phantomcircuit:king i say!
21:55:07roy:out of curiosity, what is the hash rate of testnet these days
21:55:20leakypat:phantomcircuit: nice!
21:55:35maaku:roy: two SP20 miners or so
21:55:46maaku:roy: while you're at it, fix the off-by-one misconception about locktime
21:55:56maaku:(you can't get in the chain until AFTER nLockTime)
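maaku's off-by-one, stated as code. This is simplified to height-based locktimes only, ignoring time-based nLockTime and the all-final-sequence-numbers escape hatch: the consensus finality check is `nLockTime < block height`, so a locktime of N keeps the transaction out of block N itself and the earliest possible inclusion is height N + 1:

```python
def is_final_at(tx_locktime_height, block_height):
    """Height-based nLockTime finality, simplified. A transaction
    with nLockTime = N is NOT valid in the block at height N --
    the check is strict inequality, so inclusion starts at N + 1.
    (Time-based locktimes and the 0xffffffff-sequence override
    are deliberately omitted from this sketch.)
    """
    return tx_locktime_height < block_height
```

This is exactly the misconception maaku wants the developer guide to stop perpetuating: nLockTime names the last block the transaction is *excluded* from, not the first one it can enter.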
22:12:41phantomcircuit:petertodd, neat, confirmed working
22:13:32petertodd:phantomcircuit: wait, what's working?
22:23:39roy:roy is now known as roybadami
22:39:43CodeShark:does RBF handle chains longer than a single transaction or replacing two lower fee transactions with one higher fee one?
22:39:57petertodd:CodeShark: full RBF does
22:40:29petertodd:leakypat: just pushed to my v0.10.2 full-RBF tree the new testnet rbf seed, rbf-seed.tbtc.petertodd.org
22:40:55petertodd:leakypat: I also have a simple static dns record too, rbf-seed-static.tbtc.petertodd.org, and rbf-seed-static.btc.petertodd.org for mainnet
22:41:35CodeShark:leakypat, you're the ninki guy?
22:43:49leakypat:CodeShark: yes
22:44:08bosma_:bosma_ is now known as bosma
22:44:44petertodd:leakypat: what's the deal with the android wallet? can I use it without the desktop bit?
22:45:30leakypat:You can, but the idea is just to go through and complete the desktop setup
22:46:00petertodd:leakypat: cool, installing now
22:46:25petertodd:leakypat: let me know when you have some rbf code to test/review
22:46:39leakypat:petertodd: will do
22:47:21leakypat:CodeShark: you run a wallet?
22:53:30CodeShark:leakypat: mSIGNA
22:55:48CodeShark:it's not a problem for me to support fee increases for outbound transactions...or transactions sent from another node using the same account...but SPV makes it a little hard to compute fees for inbound transactions, generally speaking
22:56:39CodeShark:except for special cases (where the inputs are all the same)
22:56:55CodeShark:and even then it's only possible to compute the fee difference, not the exact fee
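CodeShark's special case can be sketched in a few lines: when a replacement spends exactly the same inputs as the original, the unknown input sum cancels out, so an SPV wallet can compute the fee *delta* from the outputs alone, never the absolute fee. A minimal sketch (hypothetical helper, values in satoshis):

```python
def fee_delta_same_inputs(orig_outputs: list[int], repl_outputs: list[int]) -> int:
    """For a replacement spending exactly the same inputs as the original:
    fee_repl - fee_orig = (inputs - repl_outputs) - (inputs - orig_outputs)
                        = sum(orig_outputs) - sum(repl_outputs).
    The input sum cancels, so no input values (and no utxo set) are needed;
    the absolute fee of either transaction remains unknowable to SPV."""
    return sum(orig_outputs) - sum(repl_outputs)

# original pays 90_000 + 9_000 change; replacement shrinks change to bump the fee
assert fee_delta_same_inputs([90_000, 9_000], [90_000, 7_000]) == 2_000
```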
22:57:25CodeShark:I hate SPV :p
22:57:45CodeShark:or rather, I hate that transaction inputs don't contain the value
22:58:05CodeShark:or that there's no simple mechanism to query that information in an efficient, private manner
22:59:10CodeShark:but SPV also makes it basically impossible to check for double-spends involving longer chains unless you maintain your own mempool
22:59:48CodeShark:but you cannot build a mempool without having the utxo set
22:59:54CodeShark:so SPV is jacked :p
23:01:09CodeShark:please, please, please...let's start talking about moving to an O(log n) verification protocol that doesn't have a special "simplified" mode :p
23:01:48CodeShark:what matters, ultimately, are risk metrics, probabilities, and game theory :p
23:02:07CodeShark:and computational complexity theory and all that...but we'll take those as givens for now
23:02:55Luke-Jr:petertodd: wait, RBF includes full CPFP mempool logic?
23:06:00dgenr8:CodeShark: what if you got a "proto-block" message when the tip of an unconfirmed chain pays your bloom filter (this probably has a name, bip, something of which I'm ignorant)
23:08:02CodeShark:dgenr8: that would at least make certain logic possible on the client...but it involves extra overhead for the relay node and has essentially no security
23:08:20dgenr8:CodeShark: no security is just a consequence of 0conf
23:08:58CodeShark:dgenr8: point being not sure it's worth the effort for this use case...which is to have a way of warning the user that a double-spend has been detected
23:09:16CodeShark:if you can't even be sure that the proto-block is authentic, it's basically useless
23:09:26dgenr8:CodeShark: serving SPV is one of the more important functions of a full node imho
23:09:51CodeShark:if you want to use a trusted client-server model we can design a much higher level txout request API :)
23:10:20CodeShark:SPV basically serves no niche here
23:10:29CodeShark:you either go full verification or you trust a server
23:10:39CodeShark:if you trust a server, might as well use high level queries
23:10:42dgenr8:CodeShark: "here"?
23:10:54CodeShark:no niche in this space...ultimately
23:11:08CodeShark:it's terrible at verification, it's terrible at usability, it's terrible for development
23:11:21CodeShark:it's a lose lose lose
23:11:38dgenr8:CodeShark: your idea of network-wide probabilistic verification is interesting
23:12:14CodeShark:I think ultimately we must abandon deterministic verification if we want any level of scalability (we do in a sense with sha256 already, but we could tolerate even higher failure probability)
23:14:11CodeShark:what really matters is that the risk level be computable...and that it be possible to either make failure rates negligible...or have mechanisms to mitigate failures
23:14:41CodeShark:as for mechanisms, they could involve human players as part of the ecosystem (i.e. insurance or market makers)
23:14:55CodeShark:basically, a way to manage risks
23:15:21CodeShark:probably far cheaper than forcing every single computer on earth to validate every single purchase of an espresso
23:15:24dgenr8:CodeShark: short-term, with SPV, if you could ask your peers to send parent chains, you can watch for double-spends.
23:16:00CodeShark:I already ask for the node mempool...and assuming the node is trusted, this isn't a problem
23:16:14dgenr8:how often?
23:16:23CodeShark:although it would probably be better to just add better query logic to the server side to simplify the client logic :)
23:16:53CodeShark:since the server is perforce better placed to make decisions and presumably has more resources
23:17:29CodeShark:architecting it so that this logic is on the client side is totally stupid unless it provides some amount of additional security/privacy
23:18:44CodeShark:I query the mempool after the block sync...but then I set a bloom filter
23:18:59CodeShark:so it would probably make more sense to just get rid of the bloom filter after doing the historical sync
23:19:50CodeShark:and then just assume that transactions that don't connect to other transactions in the mempool must connect to the blockchain somewhere
23:20:02CodeShark:but argh...I mean...seriously?!?!
23:20:18dgenr8:CodeShark: i wonder what hearn thinks
23:24:24CodeShark:apparently this is what he thinks: https://github.com/bitcoinxt/bitcoinxt/blob/0.10.2A/src/main.cpp#L4107
23:25:56CodeShark:but without txout commitments, might as well write a REST API :p
23:26:29CodeShark:might as well just build a bc.i
23:27:32jgarzik:CodeShark, welcome to our arguments from ~12 months ago :)
23:27:42jgarzik:CodeShark, this is why getutxos isn't upstream...
23:28:10CodeShark:yes, I'm aware of that
23:28:34CodeShark:there really is no way to fix SPV
23:30:31CodeShark:SPV is completely busted and full validation is hitting a scalability wall...I hate to be a pessimist here...but....
23:31:13leakypat: but without txout commitments, might as well write a REST API :p
23:46:34CodeShark:* CodeShark summons the ghost of satoshi and asks WTF?!?!
23:48:53phantomcircuit:CodeShark, the getutxo stuff is entirely so that mike could collect his $100k for building lighthouse
23:48:56phantomcircuit:which is afaict useless
23:52:41CodeShark:I've unfortunately been in situations before where I had to deliver stuff for clients I knew wouldn't really work in the end...it can be hard not to be seduced into fooling yourself
23:55:27CodeShark:my awareness of this is largely why I actively refuse to let that happen here
23:56:23dgenr8:CodeShark: a parent chain with merkle proofs for all the inputs along the way doesn't work?
23:57:09CodeShark:dgenr8: there's no merkle proof for mempool transactions...but yes, you could have merkle proofs for inputs that do connect to the blockchain
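The confirmed half of dgenr8's idea is just standard merkleblock-style verification: an input that does connect to the chain can be proved with a Bitcoin-style merkle branch against a block header's merkle root. A minimal sketch (hypothetical function names; Bitcoin hashes with double-SHA256):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_branch(txid: bytes, branch: list[bytes], index: int,
                  merkle_root: bytes) -> bool:
    """Hash up the tree from a leaf: at each level the index's low bit
    says whether the sibling sits on the left or the right."""
    h = txid
    for sibling in branch:
        h = dsha256(sibling + h) if index & 1 else dsha256(h + sibling)
        index >>= 1
    return h == merkle_root

# two-leaf tree: either leaf verifies against the root with the other as sibling
la, lb = dsha256(b"tx_a"), dsha256(b"tx_b")
root = dsha256(la + lb)
assert verify_branch(la, [lb], 0, root)
assert verify_branch(lb, [la], 1, root)
```

The catch CodeShark names remains: the tip of the parent chain lives in the mempool, and no such proof exists for unconfirmed transactions.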
23:57:37CodeShark:but again...is it really worth the effort? :)
23:58:04dgenr8:CodeShark: right i imagine having a full mempool involves adding a lot. we need libmempool
23:58:27akrmn:utxo commitments are a flawed system also, because there is no incentive for nodes to relay the merkle tree branches
23:59:03CodeShark:utxo commitments were my last hope of "fixing" SPV - but I give up - we need a new validation mechanism
23:59:38phantomcircuit:akrmn, and are stupid expensive to calculate
23:59:53phantomcircuit:O(n log n) n = utxo entries
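A rough illustration of the cost phantomcircuit is pointing at: a naive commitment rebuilds the whole tree every block, paying an O(n log n) sort to canonicalize the set plus O(n) hashing over every entry. This is a hypothetical sketch, not any proposed scheme:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def naive_utxo_commitment(utxos: list[bytes]) -> bytes:
    """Rebuild-from-scratch commitment over serialized utxo entries:
    sort for a canonical ordering (O(n log n)), then fold into a merkle
    root (O(n) hashes). Paying this per block over tens of millions of
    entries is the expensive part; an incremental tree would instead
    cost O(log n) per entry touched."""
    level = sorted(dsha256(u) for u in utxos)
    if not level:
        return dsha256(b"")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # Bitcoin-style odd-node duplication
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# canonical ordering makes the commitment independent of insertion order
assert naive_utxo_commitment([b"a", b"b"]) == naive_utxo_commitment([b"b", b"a"])
```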