--- Log opened Thu Jul 18 00:00:59 2013
08:46 < amiller> petertodd, the thing you described could just be a memory-bound proof-of-work function
08:47 < amiller> see https://research.microsoft.com/pubs/65154/crypto03.pdf
08:48 < amiller> also the H' you described is just a digital signature but a memory-hard pow would work just as well too
10:46 < petertodd> amiller: It's more subtle than that; I want to force my peers to consume memory, not do work
10:46 < amiller> well the point is that the work requires memory
10:46 < petertodd> I don't want a POW because I don't want it to be expensive for your usual client with some spare memory to connect, only expensive to make lots of parallel connections at once.
10:46 < amiller> hm.
10:47 < amiller> in that case i think you want the Hourglass scheme from RSA
10:47 < amiller> http://www.tablusqa.com/rsalabs/presentations/hourglass.pdf
10:48 < petertodd> Nah, this has nothing to do with having data or not.
10:48 < amiller> the simplest variation is based on signatures so it's pretty much what you describe... you sign a bunch of pieces of data (that's using the trapdoor) then require them to give it back to you
10:48 < amiller> so the only way they can give it back to you is to store it
10:48 < petertodd> But that requires bandwidth to give them the data.
10:48 < petertodd> I want something that only forces my peer to keep data in RAM; ideally it is data they can generate from a seed.
10:49 < amiller> so you want to give them a concise seed, and force them to fill up a lot of their memory with it, then give you a concise digest at the end somehow
10:49 < petertodd> Yes, in a sense the "trapdoor" is that it's a function that's cheap to compute, but only if you have a big table in memory.
10:51 < amiller> hm.
10:55 < amiller> so the exponential memory-bound proof-of-work function from fabian coelho is sort of like that too.
10:55 < amiller> http://193.55.130.53/~nitaj/AFrica08Slides/kyushu-pres_Coelho.pdf
10:56 < amiller> i think normally they pick whatever they want for the leaves.
10:56 < amiller> i think there's a way for that to work
10:56 < amiller> basically you construct the leaves from a seed
10:56 < amiller> then you find the root digest, then you use the root digest to help you sample the leaves...
10:57 < amiller> but the unique challenge here is that you want to make sure there's no storage shortcuts e.g. by having identical leaves
10:57 < petertodd> sounds pretty similar to what I'm proposing, done as a NI proof
10:57 < petertodd> well, in my case the proof can be done interactively
10:58 < amiller> either way you use the merkle root as a commitment then choose interactively if you like
10:59 < amiller> so you could give them a seed, but then if you only sample a couple leaves it would be hard to show they were computed correctly from the seed without checking the whole chain.
11:01 < amiller> i don't know how to solve that, hm.
11:03 < petertodd> for the SPV anti-dos you don't have to prove they've done anything, you only have to make it such that using up RAM allows the SPV client to return the correct result much faster than not doing so, and then prioritize based on that response time
11:04 < petertodd> The problem is that you'll need this table to be fairly large, because network latency is high and disks are fast.
11:08 < amiller> well you need them to fill up their memory with data
11:08 < amiller> and then query this data
11:09 < amiller> and you don't want them to be able to do computing on the fly
11:15 < amiller> so that requires putting full-entropy data in each spot
11:15 < amiller> otherwise you could cheat by compressing
11:25 < petertodd> yup
11:26 < petertodd> It's just a matter of how big is reasonable - a 100MB table makes life hard for phones for instance, yet disks are fast enough that we probably want something that size.
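The seed → leaves → root → sample flow amiller sketches above could look roughly like this (a toy illustration only, with an arbitrary leaf count and SHA-256 standing in for whatever hash a real scheme would pick; this is not Coelho's actual construction):

```python
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

N = 1 << 10  # leaf count; a real deployment sizes this to fill RAM

def fill_memory(seed: bytes):
    # Derive full-entropy leaves from the seed so they can't be compressed away.
    leaves = [H(seed, i.to_bytes(4, "big")) for i in range(N)]
    # Build the Merkle tree bottom-up; the root commits to every leaf.
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels  # holding every level is the memory cost

def sample_indices(root: bytes, k: int):
    # Use the root digest itself to pick which leaves must be revealed.
    return [int.from_bytes(H(root, j.to_bytes(4, "big")), "big") % N
            for j in range(k)]

levels = fill_memory(b"seed")
root = levels[-1][0]
challenges = sample_indices(root, 8)
```

The difficulty amiller raises above still applies to this sketch: a verifier who checks only a few revealed leaves cannot cheaply confirm they were really derived from the seed without recomputing the whole tree.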
11:27 < petertodd> (it's also a trade-off of how often we query peers for the data, a disk can seek and grab 100MB quickly, but it can't serve 100x such requests at once)
11:28 < petertodd> 10MB isn't bad - that's 80MB of data for your standard 8 outgoing peers.
11:29 < petertodd> But it doesn't give that much protection either... 10k peers is just 10GB of ram, pretty cheap.
11:29 < petertodd> er, 1K peers
11:57 < realzies> yay guys
11:57 < realzies> SCIP
11:57 < realzies> got an email from eli ben-sasson today
11:58 < realzies> Dear Azriel,
11:58 < realzies> Some time ago you mentioned that you're interested in trying to write an LLVM backend for our TinyRAM spec. If you're still up to the task, we'll be happy to help out. I'm cc'ing the rest of the research team - Prof. Eran Tromer from Tel-Aviv U., my co-PI, Alessandro Chiesa (grad student @ MIT), Daniel Genkin (grad student @ Technion) and Madars Virza (grad student @ MIT).
11:58 < realzies> Our first paper using TinyRAM has been accepted for publication at the CRYPTO'13 conference this August. A draft is attached, and the full version will be posted in the next few weeks (we need to finish writing down some full-system performance numbers). We think this will make much more concrete our motivation in this project, and also the design choices for TinyRAM (see Section 2.1).
11:58 < realzies> Also attached is a draft of the TinyRAM spec, and the only reason it's just "0.99" is in case we get suggestions for improvements before we publish it.
11:58 < realzies> All in all, we're very close to a time where everything about TinyRAM is public and ready for open-source development.
11:59 < realzies> Are you still interested in developing a TinyRAM LLVM backend? If so, we'd love to support your effort in any way.
11:59 < realzies> Best,
11:59 < realzies> Eli
11:59 < realzies> (2 attachments)
11:59 < realzies> gmaxwell: ping
11:59 < gmaxwell> petertodd: ISTM you really want a trapdoor function that one party can compute in parallel, while the other party must compute sequentially.
12:00 < gmaxwell> petertodd: then you query the sequential party for the Nth output and then a bunch of 0-N. Memorization is the cheapest way to compute the answers.
12:01 < realzies> shall I upload the pdfs somewhere
12:03 < gmaxwell> realzies: tinyram looked super simple.
12:04 < realzies> exactly why I'm eager to make an LLVM backend
12:04 < realzies> it should be simple
12:05 < realzies> https://docs.google.com/file/d/0Bx3Ty2UX6yDLSnM3aU04YUFSNU0/
12:06 < gmaxwell> I expect a lot of value can be added by adding a bunch of tinyram-specific peephole optimizations (esp if you know the true cost of various opcodes)
12:06 < realzies> https://docs.google.com/file/d/0Bx3Ty2UX6yDLSnM3aU04YUFSNU0/edit
12:06 < realzies> mmm indeed
12:20 < gmaxwell> realzies: I wonder why they bother keeping the primary input in a tape when they require you to load it into memory in the preamble?
12:20 < gmaxwell> why not just eliminate the preamble and say that the input is in memory?
12:21 < realzies> mmm I mislinked the other pdf: https://docs.google.com/file/d/0Bx3Ty2UX6yDLeUdVODY4M3M4QWM
12:25 < realzies> gmaxwell: maybe that would make unnecessary requirements for the initial memory
12:25 < realzies> ie. now, perhaps, memory is assumed to initialize all-zero
12:25 < realzies> just a guess?
12:25 < gmaxwell> realzies: the preamble effectively creates that requirement.
12:26 < realzies> yeah just read the "initial state" section
12:26 < gmaxwell> I suppose a different prover that still reads tinyram might have a different preamble requirement.
12:26 < petertodd> realzies: congrats!
12:26 < realzies> petertodd: heh, those go to eli
12:26 < realzies> and his team
12:27 < petertodd> realzies: well, for them I just have stunned wonderment...
12:27 < realzies> :D
12:28 < gmaxwell> So, they said that their proofs are 6156 bits for 80-bit security.
12:29 < petertodd> gmaxwell: I'm not sure if sequential vs. parallel is the issue here - the function should be sequential only, but it should also be something where a big table acts as a trap-door
12:29 < petertodd> gmaxwell: sure it's nice if the defender can compute it in parallel too, but that's not the issue - they only have to have one copy of the trap-door table
12:31 < gmaxwell> if the defender only has one table then can't attackers cooperate to store one table themselves?
12:31 < gmaxwell> e.g. defender has 100MB, and an attacker has 100MB shared by his 1000 sybils.
12:32 < petertodd> Of course, but the only way they can co-operate sufficiently fast is to just be one machine, and we're back to the fact that a single high-speed, high-bandwidth machine can perform a DoS attack.
12:32 < petertodd> (high # of IPs too)
12:33 < petertodd> I'm just trying to prevent someone from attacking multiple targets at once.
12:33 < gmaxwell> realzies: so I think those numbers do suggest that zk-snarks are viable as a SCRIPTSIG in a blockchain currency on bandwidth/storage grounds.
12:34 < gmaxwell> sadly, a botnet actually has surplus computation. :(
12:36 < petertodd> of course, but doing this does force them to use cpu-time/ram, which gets their resource usage to a point higher than the defenders' summed resource usage
12:37 < petertodd> We know we can't win against an arbitrarily large attacker, but we can make the minimum attack resources orders of magnitude higher than they are now.
12:38 < petertodd> Right now it looks like a small number of EC2 nodes would make SPV clients unusable after all...
12:40 < gmaxwell> there are multiple facets of defending against this.
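One known construction with the shape gmaxwell describes (parallel or instant for the trapdoor holder, strictly sequential for everyone else) is repeated squaring modulo an RSA composite, as in the Rivest-Shamir-Wagner time-lock puzzle. A toy sketch with insecurely small numbers; a real instance would keep p and q secret and use a large modulus:

```python
# Toy parameters: both primes are shown here, which defeats the point --
# a real deployment keeps p and q secret and uses ~1024-bit primes.
p, q = 1000003, 1000033
n = p * q
phi = (p - 1) * (q - 1)
x = 12345  # public seed, must be coprime to n

def output_with_trapdoor(i: int) -> int:
    # Knowing phi(n) lets us reduce the exponent: one modexp per query.
    return pow(x, pow(2, i, phi), n)

def output_sequential(i: int) -> int:
    # Without the factorization, the only known route is i squarings in a row.
    y = x % n
    for _ in range(i):
        y = y * y % n
    return y
```

A defender could hand a peer the seed, then later challenge it for outputs at random indices; memorizing the chain becomes the cheapest way to answer quickly, which is the incentive gmaxwell points at.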
12:41 < gmaxwell> Obviously you can make the attack more expensive.
12:41 < gmaxwell> Another thing would be to make it easier to moot:
12:41 < gmaxwell> Give every node a second authenticated listening port that you can only connect to if you know some node key.
12:41 < gmaxwell> Give every client the ability to just drop in some addr:node-key settings.
12:42 < gmaxwell> Then if an attack happens, you obtain keys from a couple friends...
12:42 < gmaxwell> and then you are attack-proof
12:42 < gmaxwell> (of course, you could do this before the attack happens too)
12:44 < petertodd> Yeah, you can defend by creating a darknet basically.
12:44 < petertodd> Similarly SPV nodes can simply connect to friends.
12:45 < petertodd> Basically we're just looking for a way to distinguish a valid SPV node from one run by an attacker, and we do that by making it expensive to connect in a way that we can afford.
12:45 < petertodd> You can just as equally ask for SPV nodes to give you a fee-paying transaction, and kick them if it doesn't get mined.
12:46 < gmaxwell> people are spazzy about fees.
12:46 < gmaxwell> I imagine that 10x the _actual_ electricity cost in POW is acceptable over a fee.
12:50 < petertodd> Yeah, but remember we're talking about Android clients here - the cost to do a PoW is huge for them.
12:51 < gmaxwell> I know.
12:53 < realzies> gmaxwell: mmm
12:55 < petertodd> The other nice thing is, if you are thinking about trying to prevent someone from peering with the whole network, you can scale the work/resources required by the % of 1's in the peer's bloom filter. (100% if they don't specify one)
12:57 < gmaxwell> sadly, lots of ones in the bloom filter doesn't prevent you from becoming cpu/disk bound.
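petertodd's bloom-filter scaling idea amounts to a one-line pricing rule (a hypothetical helper, not anything in an actual client; the linear scaling is an assumption):

```python
def required_table_bytes(base_bytes: int, bloom_bits) -> int:
    """Scale the anti-DoS memory requirement by the peer's bloom filter
    density: a sparse filter (few 1 bits, little visibility into the
    transaction stream) pays less; no filter at all is treated as
    matching 100% of transactions and pays full price."""
    if not bloom_bits:
        return base_bytes
    density = sum(bloom_bits) / len(bloom_bits)
    return int(base_bytes * density)
```

For example, a peer whose filter is half ones would owe half the table: required_table_bytes(10_000_000, [1, 0, 1, 0]) gives 5,000,000 bytes.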
12:58 < amiller> realzies, any chance you'd share the draft with me
12:58 < amiller> of the tinyram paper
12:58 < amiller> i'll ask permission myself if you don't want to violate implied confidentiality but it shouldn't be a big concern because it has already been peer reviewed
12:58 < petertodd> I'm thinking lots of ones means they'll match on a high % of the transactions, and thus give you visibility into the state of the network.
12:59 < petertodd> IE we want it to cost just as much to act as a full peer to snoop the network, as to act as a few SPV nodes to snoop the network.
12:59 < amiller> realzies, i realized you already pasted a link to the tinyram spec, but i mean the crypto 2013 paper, snarks for C
13:01 < gmaxwell> amiller: it's the last link.
13:01 < amiller> there's just two links and they're both the same?
13:01 < amiller> except for 'edit'
13:02 < realzies> amiller: I did
13:02 < gmaxwell> 09:21 < realzies> mmm I mislinked the other pdf: https://docs.google.com/file/d/0Bx3Ty2UX6yDLeUdVODY4M3M4QWM
13:03 < realzies> ahh
13:03 < realzies> ^^
13:03 < amiller> i must have pinged out
13:03 < realzies> ty gmaxwell
13:08 < gmaxwell> So their verifier runs in 50ms for input (number of field elements) size 2^6.
13:10 < gmaxwell> (this is on a multicore 2.4ghz opteron box, but I expect that verification time is not parallel)
13:10 < gmaxwell> the proving is slow though.
13:13 < gmaxwell> For a circuit of size 2*10^6 it takes them 66 minutes (and this box has 48 cores)
13:13 < gmaxwell> The circuit size is effectively 1200 * number of tinyram cycles.
13:14 < gmaxwell> (cycles meaning execution time)
13:22 < amiller> how much is that per proof in EC$
13:23 < amiller> assuming all of the coordination issues and latency are solved, and you just have to post an appropriate btc bounty to get the horde to work on it, that's still a lot of power
13:23 < gmaxwell> the proving is highly parallel fortunately.
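A back-of-envelope reading of the benchmark numbers gmaxwell quotes (approximate, and assuming the 66 minutes of proving keeps all 48 cores busy):

```python
GATES_PER_CYCLE = 1200      # circuit size ~ 1200 * TinyRAM cycles
CIRCUIT_GATES = 2 * 10**6   # benchmarked circuit size
PROVE_MINUTES = 66          # proving wall-clock on the 48-core box
CORES = 48

cycles = CIRCUIT_GATES / GATES_PER_CYCLE   # ~1,667 TinyRAM cycles proved
core_seconds = PROVE_MINUTES * 60 * CORES  # ~190,080 CPU-seconds in total
per_cycle = core_seconds / cycles          # ~114 core-seconds per cycle
```

So each TinyRAM cycle costs on the order of two core-minutes of proving, while verification stays around 50ms regardless, which is the asymmetry that makes outsourcing attractive.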
13:23 < amiller> if you can relate the cost of computing a hash to the cost of one of these field ops, you could bound the number of these per day using the current PoW network
13:24 < amiller> you know, in the idealistic unfathomable case that all the network's Work actually coincides with such proving
13:25 < gmaxwell> I'm unsure as to what model you're imagining where the network is expending computation on proving.
13:26 < gmaxwell> e.g. that's inapplicable to using SCIP as scriptsigs.
13:28 < gmaxwell> I'd say maybe in proving that the transactions in a block are valid... but that has an unfortunate property of making the POW work proportional to the number of transactions in a block... which is undesirable.
13:28 < gmaxwell> though it does create some natural bounds on scalability!
13:28 < gmaxwell> hm....
13:28 < amiller> i don't think it necessarily has that unfortunate property but it's interesting - anyway still even just with the scriptsig case...
13:29 < amiller> the point is it's a lot of work but it's easy to check, so it would be nice to use bitcoin as a way of outsourcing it to the public
13:29 < amiller> vanity address mining is the closest analogy
13:29 < gmaxwell> petertodd: What would happen to the concerns about the blocksize limit if instead block difficulty were diff*f(transactions)?
13:29 < gmaxwell> amiller: oh that's irrelevant.
13:30 < petertodd> petertodd: Doesn't change anything IMO because diff has nothing to do with censorship-resistant bandwidth.
13:30 < petertodd> er, gmaxwell:
13:30 < gmaxwell> amiller: you do the work for your vanity generation _outside_ of the SCIP environment. Then you use only the SCIP to get a signature of knowledge for a faithful answer.
13:30 < petertodd> gmaxwell: On the other hand, I *really* like jdillon's voting scheme.
13:31 < gmaxwell> petertodd: F() might as well be a function matching the two.
13:32 < petertodd> gmaxwell: If diff has anything to do with it, people can make it irrelevant by voting with diff taken into account.
13:33 < gmaxwell> petertodd: Was this a proposal to use PoS to vote for parameters like that?
13:33 < petertodd> gmaxwell: Yes, and a very cleverly done one that can't be manipulated by miners.
13:34 < gmaxwell> petertodd: the thing I don't like about that (ignoring solving the censorship problems) is, of course, that it reduces to "give mtgox or blockchain.info unilateral say".
13:34 < gmaxwell> Current control over funds is not exactly 1:1 with empowering the users of bitcoin.
13:34 < gmaxwell> Uh, but it's probably better than letting miners pick.
13:34 < gmaxwell> How does John solve the problem of miners denying suffrage?
13:34 < petertodd> gmaxwell: Basically, your vote is what *enables* a miner to prove to the world that the people holding Bitcoins want the blocksize to be something. A txout without such a vote is a vote for the status quo, and txouts age over time to account for lost coins. (after one year)
13:35 < petertodd> gmaxwell: Basically the scheme recognizes that miners can always reduce the blocksize limit, but forces them to prove consent of bitcoin holders to raise it.
13:36 < gmaxwell> ah, that's interesting. Making it one-sided removes the censorship risk.
13:37 < gmaxwell> Sadly that doesn't prevent bitcoin from committing suicide, but at least it would be with the consent of people that own a bunch of it.
13:37 < petertodd> Yup. I'm happy if Bitcoin is destroyed with the concent of those holding Bitcoins myself.
13:37 < petertodd> *consent
13:38 < petertodd> From a practical perspective, it also takes a lot of politics out of the situation IMO.
13:39 < gmaxwell> Well, to be clear: it's some kind of 'majority' consent... which means that some people holding bitcoin will not consent to the suicide. But the alternatives sound worse.
13:39 < gmaxwell> (e.g. alternatives being technical-guy political tournaments and fork-risking wars over client software)
13:40 < gmaxwell> I think the ideal would have been to establish bitcoin with initial parameters that could be kept forever.
13:40 < gmaxwell> But since that seems to be impossible, having an economic majority seems like the next best thing.
13:41 < petertodd> Yup, see Peter Vessenes' comments about how much a fork would harm bitcoin: https://github.com/pmlaw/The-Bitcoin-Foundation-Legal-Repo/pull/4#issuecomment-18988575
13:42 < petertodd> In a sense the presence of alt-coins makes it always be an economic majority thing, but the process of people dumping bitcoin for another coin will be really ugly.
13:42 < petertodd> Much better if we come to consensus on an equitable process to choose the limit.
13:43 < petertodd> It'll still lead to PR campaigns and the like of course, but those efforts become less relevant to the dev team.
13:45 < petertodd> The voting method is also designed such that an SPV client can verify the vote, and in particular, that means even if you don't hold the coins directly you can verify that the person who did voted according to your wishes. (or the majority of a bank's clients' wishes for instance)
13:46 < gmaxwell> petertodd: can it support key delegation? in particular I should be able to take my coin signing keys offline.
13:46 < realzies> so imma start up an llvm backend project, and see where I can go
13:46 < realzies> I've never dealt with the LLVM backend api, so it's gonna be a learning experience
13:46 < petertodd> gmaxwell: With scripting support, yes.
13:46 < realzies> but first, breakfast
13:47 < petertodd> gmaxwell: The idea is a vote is considered valid if a scriptSig matches a txout scriptPubKey, so just add a special OP_VOTE thing - would work best with MAST support.
13:47 < gmaxwell> wow, you seem to have politically influenced vessenes.
13:48 < petertodd> Well, jdillon too.
13:49 < gmaxwell> One problem with the vote thing— I expect— is there is an uncountably infinite number of free parameters.
13:49 < gmaxwell> e.g. how fast can the parameters be changed, what are the maximums and minimums.
13:49 < petertodd> For sure, such votes can be extended to anything...
13:50 < petertodd> You could just as easily vote on the coin distribution schedule.
13:50 < gmaxwell> Yes, _HOWEVER_, as I said above the ideal is that we have something and that it never changes— let people switch currencies if we got it that wrong.
13:50 < petertodd> But then again, changing the blocksize is setting a precedent that we're willing to change an economic parameter too.
13:51 < gmaxwell> But well, that doesn't work when basically everyone can agree that the parameter is probably not right, at least not right forever.
13:51 < gmaxwell> I think we can all agree that the distribution schedule is right enough forever.
13:51 < petertodd> Yeah, well, something I realized recently was you can construct a PoW function for an alt-coin that forces miners to prove they've attacked Bitcoin.
13:51 < gmaxwell> And changing it against the consent of some would be no better than letting people change currencies on their own.
13:52 < gmaxwell> petertodd: oh sure, trivial to do. merge mine with bitcoin and constrain it to only be 'bad blocks'.
13:52 < petertodd> Yeah, anyway, if there *was* a strong movement to change the distribution schedule, well, it'd be better to do it with a vote than by fiat.
13:53 < gmaxwell> Whereas with blocksize, I do think that changing it with the consent of most but not all is actually still politically and morally superior to saying "fuck you, switch to fatcoin".
13:53 < petertodd> gmaxwell: Yup, and make those bad blocks empty aside from a bunch of UTXO spam...
13:53 < petertodd> Yeah, and what jdillon proposed was to calculate the median of the votes, which means that everyone's vote did count.
13:55 < gmaxwell> I'll have to look at the details later, I'm still getting myself comfortable with making the blocksize controlled that way.
13:55 < petertodd> Yeah, and details matter - I don't think you can prove a median was calculated accurately without all the votes, for instance.
13:56 < gmaxwell> I suppose you could gain traction for a particular implementation by proposing them and— externally to the blockchain— gaining PoS signmessages.
13:56 < petertodd> Ha, yeah for sure.
13:56 < gmaxwell> petertodd: yes, I would have instead expected something where each block commits to a set of votes, and the block hash picks a representative vote.
13:56 < petertodd> gmaxwell: Yup, NIZK-style random vote.
13:57 < petertodd> gmaxwell: He did say that the per-block vote should be the median, and to then take the mean over blocks - that can be proved incrementally.
13:58 < gmaxwell> one problem with voting is that many voters will be pretty indifferent. It will be easy to buy their votes.
13:58 < petertodd> Oh, and the nonce for the NIZK proof should probably be taken by getting the LSB of the last 64 blocks...
13:59 < gmaxwell> does that matter?
13:59 < petertodd> Sure, but it's ultimately an economic power vote anyway - what I'd be more worried about is wallet software that votes behind users' backs.
13:59 < gmaxwell> If the current block goes into the proof, which it must.. then you could search for your favorite vote.
13:59 < petertodd> Yes, because you want to make sure that you can't apply more hashing power to mess with the vote.
13:59 < gmaxwell> petertodd: yea, except you don't solve that.
14:00 < gmaxwell> e.g. H(last block .. this block) is no better than H(this block) for picking the resulting value.
14:00 < petertodd> Sure I do: if the LSB of the current vote only allows you to influence the path taken at the bottom of the tree, then you have the least possible control. (if the bottom is sorted)
14:01 < gmaxwell> then you can deny entry into the tree for selected votes to get two votes you like into the position decided by that bit.
14:01 < gmaxwell> and then you get complete selection with only 1 bit more work.
14:01 < petertodd> Right, but the miner chooses what votes to include in the first palce.
14:01 < petertodd> *place
14:02 < gmaxwell> I'll have to go read jdillon's thing then, as I'm not quite following how it's really solved.
14:02 < petertodd> We're only trying to make sure they can't include 10 votes, and claim all 10 were for the highest size.
14:03 < gmaxwell> so, maybe it would help the proposal: but I would suggest that engineering sanity constrains the maximum rate of blocksize change.
14:03 < gmaxwell> And so instead of people voting on a particular size they could just vote for larger or not.
14:04 < gmaxwell> and stop voting for larger when it's large enough.
14:04 < petertodd> Yeah, he's done that to a degree: if the size goes up, and people stop voting, the status quo votes are for the average of the new and old size, so the size will automatically start going down again.
14:04 < petertodd> One issue with sanity constraints is that picking the rate of max change is in itself political...
14:05 < gmaxwell> yea that's what I was talking about with the uncountable parameter space.
14:05 < gmaxwell> But I think it's less bad.
14:06 < gmaxwell> The exact value is debatable, but I think I can say "whatever it is, it shouldn't be faster than doubling every year" and I think no one would argue.
14:07 < petertodd> Hmm... given the votes are essentially part of the UTXO set, actually what the miner does is add votes to that set, and the NIZK is then picking representative votes - it is acceptable to then calculate the median of the votes for the blocks in the past year in that case.
14:07 < gmaxwell> maybe the downward limit is harder to guess.
14:07 < petertodd> gmaxwell: I'm sure Mike would. :P
14:07 < gmaxwell> I don't think he would, or if he did he'd give up easily.
14:07 < petertodd> Yeah, in jdillon's proposal with miner consent the limit can drop as fast as the users want it to.
14:08 < gmaxwell> doubling every year is really really fast. It's faster than expected computer scaling.
14:08 < petertodd> Which is interesting: a 50% economic majority, with 50% hashing power, can vote to shut down Bitcoin.
14:08 < gmaxwell> and yet it's still slow enough that you can plan for it. Every fiscal year plan to double the amount of storage you're already using. :P
14:08 < petertodd> True, doubling works for that.
14:09 < gmaxwell> petertodd: should there be a minimum maximum? on one hand, it's stupid to vote it down to nothing. OTOH miners can already do that.
14:09 < gmaxwell> the vote would just make it easier for miners to coordinate doing that.
14:09 < petertodd> Heh, you could say every year we pick a representative UTXO, and if they voted to double, we do.
14:10 < gmaxwell> petertodd: variance is a bit high on that. :P
14:10 < petertodd> Yup, I don't see anything wrong with that, and after all it *does* require a 50% majority of miners.
14:10 < petertodd> A 50% majority can always choose to ignore the minority, including those votes.
14:10 < gmaxwell> petertodd: just for technical reasons, a limit might make sense, because, uh, you don't want to actually stupidly end up in a state where a next block isn't possible. :P
14:11 < petertodd> Yeah, heck, a lower limit of 1MB would probably be fine.
14:11 < petertodd> Maybe say 100KB for sake of argument.
14:12 < gmaxwell> making it somewhat small means that from day 1 people would need to vote to keep the size up, that's probably good.
14:12 < gmaxwell> e.g. you want to actually make the minimum smaller than the current need so the need to vote doesn't surprise people later.
14:13 < petertodd> The thing is a non-vote is always a vote for the status quo, so people *don't* need to vote if they are happy.
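The commit-then-sample idea debated above (commit to a sorted set of votes, then let block-hash bits steer a path down to one representative) could be sketched like this (a toy model; the bit extraction and tree layout are placeholders I've assumed, not jdillon's actual spec):

```python
import hashlib

def pick_representative(votes, block_hash: bytes):
    """Pick one committed vote by walking the sorted list as if it were a
    Merkle tree, branching left or right on successive bits of the block
    hash. Because the leaves are sorted, grinding one extra hash bit only
    shifts the outcome to a neighboring vote, limiting a miner's gain --
    the remaining lever, as gmaxwell notes, is choosing which votes to
    include in the commitment at all."""
    ordered = sorted(votes)
    lo, hi = 0, len(ordered)
    bit = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if (block_hash[bit // 8] >> (bit % 8)) & 1:
            lo = mid
        else:
            hi = mid
        bit += 1
    return ordered[lo]

votes = [1_000_000, 1_000_000, 2_000_000, 8_000_000]
rep = pick_representative(votes, hashlib.sha256(b"block header").digest())
```

This captures the narrow goal petertodd states: a miner committing 10 votes cannot also claim all 10 were for the highest size, since the sample is bound to the sorted commitment.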
14:13 < petertodd> (or just want the limit to reduce a bit)
14:13 < gmaxwell> petertodd: how do you vote for a reduction?
14:14 < petertodd> You vote for a reduction and a miner can chose to include it.
14:14 < petertodd> *choose
14:14 < petertodd> (john thought some % of the block limit should be reserved for votes FWIW)
14:14 < gmaxwell> hm. perhaps instead the vote-absent target should be some median of the last N block sizes.
14:15 < gmaxwell> Since miners can already drive it down to nothing regardless of what the voters think.
14:15 < petertodd> That's what john proposed: the limit changes once per year, and a non-vote is a vote for the median of last year's and this year's limit.
14:15 < gmaxwell> not a median of the limits, a median of the observed block sizes.
14:15 < petertodd> Basically that's just there so that if a too-high size allows for censorship, the limit will gradually reduce.
14:15 < petertodd> But that means miners can just pad blocks to change people's status quo votes.
14:16 < gmaxwell> petertodd: yes, so then they stop voting.
14:16 < petertodd> But you can't *not* vote the status quo except by voting something else.
14:17 < gmaxwell> or to be more clear— miners' actual observed behavior _is_ the status quo.
14:18 < gmaxwell> petertodd: median(blocks) < limit < 2*limit. You're voting on whether the limit should be closer to median(blocks) or 2*limit.
14:18 < gmaxwell> if you don't vote, that's a vote for the median, and the limit will fall.
14:18 < petertodd> Hmm... that's reasonable.
14:18 < gmaxwell> (as the median must always be smaller than the limit)
14:18 < gmaxwell> the speed at which it falls depends on the miners' behavior.
14:19 < gmaxwell> it will fall slowly if they're consistently right at the limit.
14:19 < petertodd> Although it's easy for all miners to decide to pad blocks to keep median(blocks) == limit
14:19 < gmaxwell> maybe median(blocks)-ε just incease they .. rigt
14:19 < gmaxwell> er right.
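gmaxwell's rule above (the limit lives between median(blocks) and 2*limit, and non-votes pull it toward the median) might be modeled as follows; treating the vote weighting as a simple linear interpolation is my assumption, not part of the proposal:

```python
from statistics import median

def next_limit(limit, recent_sizes, up_fraction):
    """Interpolate the new limit between the median of observed block
    sizes (the floor miners already set by their actual behavior) and
    2*limit, weighted by the fraction of votes for 'larger'. With
    up_fraction == 0.0 (nobody votes) the limit decays toward usage."""
    floor = median(recent_sizes)
    return floor + up_fraction * (2 * limit - floor)

# Nobody votes: the limit falls toward what miners actually produce.
decayed = next_limit(1_000_000, [300_000, 400_000, 350_000], 0.0)
# Everybody votes up: the limit doubles.
doubled = next_limit(1_000_000, [300_000, 400_000, 350_000], 1.0)
```

The padding attack petertodd raises shows up directly: miners who keep blocks stuffed so that median(recent_sizes) sits at the limit can stop the decay entirely.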
14:19 < petertodd> With jdillon's proposal, the limit *will* fall even in that case.
14:19 < petertodd> For that matter, not all miners, a 50% majority of miners.
14:20 < gmaxwell> yea, it doesn't actually even need to be a median, it could be a mean or some kind of weighted mean.
14:21 < petertodd> I'd just keep it as vote for 2*limit or vote for limit/2 in that case, pick a representative UTXO for each block, and calculate a weighted mean over the past year's worth of blocks.
14:21 < petertodd> Every step of that is cheap to prove.
14:22 < gmaxwell> So that has stability problems, I think.
14:23 < gmaxwell> basically, if blocks are full and you're like "fuck! I have more bandwidth, I want cheaper transactions"
14:23 < gmaxwell> you'll be voting 2* all year long with all your friends.
14:23 < gmaxwell> maybe you really only needed a 10% bump.
14:23 < gmaxwell> you'll be pissed all year and then get a great big step when you really only needed 10% (but you don't _know_ you only needed 10%)
14:24 < gmaxwell> so it should probably be more continuous to facilitate discovery.
14:24 < gmaxwell> One problem is that a rolling window has a high group delay.
14:25 < petertodd> Hmm... make the limit change every block, by 2 / (1year/10minutes)?
14:25 < gmaxwell> so you're voting 2* for a long time, and then finally it really goes up.. and keeps going up even though you're like "fuck, too big!"
14:25 < gmaxwell> so there is a tradeoff there.
14:25 < petertodd> Yes, but everyone can spend their txouts to change their votes.
14:26 < gmaxwell> okay, I'll accept that it's acceptably solvable.
14:26 < petertodd> Of course, in the context of computer systems, chances are 2x isn't really a big change.
14:27 < gmaxwell> well not just computer systems.
14:27 < gmaxwell> this is needed to keep fees up to prop up difficulty.
14:28 < petertodd> Against an attacker, does 2x feel like much safety margin?
14:31 < petertodd> Oh nice, so 1year/10minutes = 52,560 ~= 2^16, so the code can simply find a representative UTXO, and if the vote is to raise, do limit += limit>>16
14:31 < petertodd> If the vote isn't to raise, do limit -= limit>>17
14:32 < petertodd> oh, wait, no I'm an idiot...
--- Log closed Fri Jul 19 00:00:02 2013
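For the record, the arithmetic petertodd backs away from in that last exchange: a year of ten-minute blocks is 52,560, not 2^16 = 65,536, and compounding the shifts over a year lands nowhere near a clean doubling or halving:

```python
BLOCKS_PER_YEAR = 365 * 24 * 6  # 52,560 ten-minute blocks per year

# Applying limit += limit>>16 every block compounds to more than doubling:
factor_up = (1 + 2**-16) ** BLOCKS_PER_YEAR    # ~2.23x per year
# Applying limit -= limit>>17 every block falls well short of halving:
factor_down = (1 - 2**-17) ** BLOCKS_PER_YEAR  # ~0.67x per year
```

A shift chosen so that the per-block increment times 52,560 equals ln(2) would give the intended doubling, which is presumably the correction behind "oh, wait, no".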