00:01:35 | nsh: | Final Report on Main Computational Assumptions in Cryptography -- http://cordis.europa.eu/docs/projects/cnect/6/216676/080/deliverables/001-DMAYA6.pdf |
00:02:29 | nsh: | seems strange that PKE isn't reducible to another problem
00:02:39 | nsh: | and someone states elsewhere that it's not falsifiable |
00:45:44 | gmaxwell: | nsh: I've mentioned that here before. |
00:46:24 | nsh: | oh? |
00:47:58 | AlexStraunoff: | AlexStraunoff is now known as Sqt |
00:49:13 | gmaxwell: | The non-falsifiability problems apply fairly generally to succinct ZKPs.
00:49:16 | gmaxwell: | Relevant paper; https://eprint.iacr.org/2010/610.pdf |
00:50:20 | nsh: | ty |
01:03:52 | gmaxwell: | nsh: the paper is dense IIRC; but the whole idea reduces to a kind of trivial idea (if I remember it correctly). |
01:04:36 | gmaxwell: | Basically say you have a black box that produces false proofs for a system; you want to use this to show that some underlying hard problem is not really hard (e.g. by using the black box to solve the hard problem). This is the normal black-box reduction approach.
01:05:48 | gmaxwell: | They show that a succinct ZKP can be totally busted, and you can have an efficient attacker in a black box; and yet may still not be able to show any particular underlying hard function is insecure; because the black box may only produce false proofs for NP statements so complex that no one could possibly verify them; so you can't actually tell if the false proofs are false proofs. |
01:06:25 | gmaxwell: | And so you cannot get an unconditional reduction where access to a black-box false proof maker is guaranteed to tell you anything useful at all.
01:06:35 | gmaxwell: | Or at least that was my understanding of it. |
01:09:15 | gmaxwell: | There is of course much more to formalize and complete the argument. I also think it had a caveat: it doesn't hold if basically P==NP or if all cryptographic assumptions are actually false. :)
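For reference, a rough sketch of the notion of a falsifiable assumption from the Gentry-Wichs paper linked above, and of why succinct-argument soundness escapes it (a paraphrase, not the paper's exact formulation):

    An assumption is \emph{falsifiable} if it is given by an efficient interactive
    challenger $C$ and a threshold $c \in [0,1)$, and it states that for every
    efficient adversary $A$,
    \[
      \Pr\big[\langle A, C \rangle(1^\lambda) = \text{``win''}\big] \le c + \mathrm{negl}(\lambda),
    \]
    so any claimed break can be checked efficiently by just running the game.
    Soundness of a succinct argument is not of this form: a black-box forger may
    only output accepting proofs of false statements too large for any efficient
    challenger to verify, so its success is never efficiently recognizable, and a
    black-box reduction learns nothing usable from it.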
02:03:46 | kanzure: | "The problem with this implementation is you can still drive the block size up easily and in not that much time. So block congestion is still a tool for the resource rich. A self-scaling method will require the block size to be able to go both ways. We basically need a formula that helps incentivize miners to keep the block size as small as possible while still allowing it to increase. Or put another way, one that punishes miners for ... |
02:03:52 | kanzure: | ... submitting larger blocks relative to the current max by reducing the next valid block size by some proportion of max / current block size (probably not a linear one). As TX fees become a larger percentage of miner revenue, miners will value scarcity in the block anyway, so ultimately we need a way to punish the network for more transactions. We know that miners still indirectly value more transactions because more transactions ... |
02:03:58 | kanzure: | ... implies more adoption, which implies more demand for BTC, which implies a better exchange rate. There's a way to balance these things, we just have to figure it out." |
02:04:21 | kanzure: | from http://www.reddit.com/r/Bitcoin/comments/35ao1e/xyz/cr2q1ft |
02:55:51 | wallet42: | wallet42 is now known as Guest5910 |
02:55:51 | wallet421: | wallet421 is now known as wallet42 |
03:10:09 | maaku: | kanzure: that sounds somewhat familiar.... |
03:11:38 | kanzure: | oh was that you |
03:13:20 | 21WAB31HA: | 21WAB31HA has left #bitcoin-wizards |
03:13:40 | maaku: | no it wasn't, but he's basically asking for the exact thing I posted to the list within an hour or so of his reddit comment |
05:34:49 | amincd_: | criticism welcome: http://www.reddit.com/r/Bitcoin/comments/35d9ek/the_scalability_triangle_we_can_have_two_of_three/ |
08:05:14 | wolfe.freenode.net: | topic is: This channel is not about short-term Bitcoin development | http://bitcoin.ninja/ | This channel is logged. | For logs and more information, visit http://bitcoin.ninja |
08:05:14 | wolfe.freenode.net: | Users on #bitcoin-wizards: andy-logbot Logicwax Mably lclc p15 orperelman s3gfault dEBRUYNE_ gill3s kmels antanst hktud0 p15x_ NkWsy b_lumenkraft shesek HostFat priidu hashtag ThomasV jhogan42 TheSeven Dr-G2 elastoma jtimon mkarrer DougieBot5000 felipelalli fanquake GAit face justanotheruser maaku jmcn_ Starduster_ sparetire cluckj waxwing miisly andytoshi bosma grubles LeMiner gielbier btcdrak stonecoldpat bedeho Adlai helo hashtagg veox fluffypony sparetire_ uumdbmd |
08:05:14 | wolfe.freenode.net: | Users on #bitcoin-wizards: _whitelogger Fistful_of_Coins cpacia PRab yoleaux prodatalab Jaamg dansmith_btc melvster dc17523be3 arubi_ hulkhogan_ Emcy binaryatrocity d1ggy se3000 dgenr8 rustyn deego theymos Cory sipa scoria Eliel azariah isis NeatBasis kyuupichan metamarc gnusha amiller kanzure harrow ebfull leakypat xabbix HM iddo michagogo tromp_ harrigan_ copumpkin Madars mariorz epscy vonzipper catcow a5m0_ smooth dignork ttttemp_ pollux-bts runeks coryfields |
08:05:14 | wolfe.freenode.net: | Users on #bitcoin-wizards: CryptoGoon Sqt CodeShark poggy jbenet cfields platinuum adams__ livegnik sneak K1773R Alanius nsh tromp petertodd brand0 c0rw1n yorick koshii ir2ivps5 sadoshi jgarzik OneFixt richardus luny Krellan null_radix PaulCapestany nephyrin phedny so davout phantomcircuit afdudley cdecker lmacken jonasschnelli pigeons Luke-Jr SwedFTP BananaLotus guruvan Meeh optimator bliljerk101 lnovy ajweiss wiz nanotube dardasaba__ forrestv artifexd mappum yrashk |
08:05:14 | wolfe.freenode.net: | Users on #bitcoin-wizards: mikolalysenko Muis [d__d] lmatteis airbreather warren gmaxwell weex_ Xzibit17 GreenIsMyPepper sdaftuar eric roasbeef throughnothing s1w CryptOprah huseby dasource mm_1 Taek morcos crescendo nickler Zouppen EasyAt_ Iriez wumpus merlincorey [ace] sturles berndj comboy jaromil catlasshrugged_ Apocalyptic cryptowest_ Graet indolering Keefe larraboj ryan-c jessepollak gribble mr_burdell d9b4bef9 starsoccer kumavis otoburb midnightmagic BlueMatt |
08:05:14 | wolfe.freenode.net: | Users on #bitcoin-wizards: TD-Linux wizkid057 Anduck luigi1111 gavinandresen AdrianG BrainOverfl0w @ChanServ Oizopower gwillen kinlo sl01 STRML warptangent espes__ |
09:48:07 | Jaamg: | amincd_: my take on the subject (copy paste from the reddit thread): we should have as big of a blockchain as is technologically possible. After we have that, we let the market decide which transactions happen where. If as big of a blockchain as is technologically possible is not enough for all blockchain-preferred transactions, then Mike Hearn's Crash Landing is what happens. |
09:48:17 | Jaamg: | but I'm just a random lurker |
09:52:05 | gmaxwell: | The arguments on that page don't match our expirence with full blocks at the end of 2012/ beginning of 2013; not really make much sense to me no matter what the supply is that has no relation to the demand; blocks could be 200MB (ignoring the viability of that in general) but then someone could offer 201MB of load; just like in 2012 a single really inefficient service pushed the network against i |
09:52:11 | gmaxwell: | ts limits. |
09:52:30 | gmaxwell: | So that page really seems to be arguing that blocks need to be unboundedly large because things will fail if they're full; but failure isn't what we observed previously.
09:55:28 | Jaamg: | I'm not saying that complete failure happens if the blocks are full. What happens is that not all blockchain-preferred transactions can happen in the blockchain.
09:55:48 | Jaamg: | Maybe they happen somewhere else or maybe they don't happen at all. |
09:56:30 | gmaxwell: | yea, that's fair; it's also just a consequence of resource allocation; there is an actual true limit in the supply of well-decentralized network capacity... so not all potential users may be able to get to it.
09:57:45 | Jaamg: | yes, I agree |
10:00:48 | Jaamg: | So it's kind of a trade off whether the potential user is left out from the blockchain because of too tough tech requirements for running a node or too little capacity in the blockchain
10:01:05 | Jaamg: | maybe I'm just being captain obvious in this |
10:02:10 | gmaxwell: | sipa says, if the size is too large the system is moot because no one can verify so the lack of security makes it worthless; if the size is too small it's secure but that's irrelevant because you can't get access. Equilibrium is someplace in-between.
10:10:51 | Jaamg: | yes, I agree that too large blocks lead into a situation where only a few "professionals" can upkeep a node. And a few professionals are an easier target for jack-booted thugs. I guess what was missing from my original argument was what I meant by "as big as technologically possible"
10:12:15 | gmaxwell: | yea, I think I actually understood you there. if I thought you meant possible at all.. well thats a ton; if you want to e.g. admit a full rack of equipment. :) |
10:12:18 | Jaamg: | It's of course a hard question. I think that tech requirements should not be more than what is possible with "standard home computer / net" ...or maybe even "highish end home computer / net" |
10:13:22 | Jaamg: | yea, I know you understood. But also noticed that my original argument was a little bit vague in that sense |
10:13:40 | gmaxwell: | Jaamg: well I think more like 'using a small enough fraction of reasonably performant hardware/connectivity that it's not a nuisance'
10:14:06 | gmaxwell: | which also means it can run on somewhat less performant setups (but using most of its resources). |
10:15:25 | gmaxwell: | e.g. ignoring the bad bufferbloat interaction, Bitcoin Core on a multimegabit cable connection and a modern i7 quad core with 16GB RAM is basically unnoticeable right now. But it still runs on the slowest DSL you're likely to find plus a Raspberry Pi; though using all the resources and behaving obviously slowly.
10:15:58 | gmaxwell: | This is important too because not all the world is the most developed parts of the world.
10:16:41 | amincd: | the argument that past a certain average size of block, Bitcoin is simply not secure/useful, makes sense to me. However, what I see as the open question is: would the larger economy associated with more legitimate (non-spam) transaction data attract more users willing to run full nodes, thus compensating for the loss of full nodes from higher node operating costs?
10:17:37 | gmaxwell: | amincd: "number of nodes" is not a concern, it's not a capacity issue. If there was a capacity shortage; I'd plunk a credit card into ec2 and add a lot of capacity personally. |
10:18:07 | Jaamg: | yes, it's important "what is behind a node" |
10:18:08 | gmaxwell: | Node usage is a security issue; the criteria is how hard it is to violate the network's security properties due to users protecting themselves by not just trusting others. |
10:18:27 | Jaamg: | what kind of services/users/etc |
10:18:49 | amincd: | gmaxwell, number of independent parties running full nodes? |
10:19:17 | gmaxwell: | someone that just starts up a bunch of ec2 nodes isn't really helping on this... it may add some somewhat useful capacity; but capacity can be added in many ways. outright capacity doesn't strictly need much decentralization.
10:19:33 | gmaxwell: | amincd: and amount of economic activity individually protected by their own nodes.
10:20:54 | fluffypony: | node centralisation is far more concerning than miner centralisation |
10:21:29 | fluffypony: | with miner centralisation they can certainly be selfish, but they can't change the rules without the vast array of non-mining nodes rejecting their blocks |
10:21:41 | amincd: | SPV users trust others, yes, but it's not trust of the same manner that underpins a classic bank-customer relationship. It's only trusting that >50% of mining power is not trying to defraud them, and trusting that they are not isolated from the network and connected exclusively to cancer nodes |
10:22:06 | fluffypony: | with node centralisation the small handful of node operators can change the rules at their discretion |
10:22:51 | Jaamg: | i think node centralisation only becomes a problem when it becomes possible to coerce node-runners |
10:22:58 | gmaxwell: | node decentralization is what keeps miners on good-ish behavior even if miners are too centralized; not that the latter isn't a serious concern too (e.g. due to the potential for censoring transactions)
10:23:58 | gmaxwell: | amincd: right the combination of mining centralization (which we have pretty terribly right now) and node centralization is not good however.
10:24:39 | amincd: | what kind of selfish attacks are miners going to do because too low a percentage of Bitcoin users runs full nodes?
10:24:41 | fluffypony: | Jaamg: you don't even need to coerce them, they'll eventually just come together and say "you know, we're operating the 10 nodes around the world, and we don't get paid for it. Let's institute a small, 2%-of-value fee that gets paid to us for our trouble." |
10:24:46 | gmaxwell: | amincd: let's be clear when you say "that >50% of mining power is not trying to defraud them" you mean "so long as a group of largely anonymous self-selecting dozen or so parties (and maybe as few as 4 or 5) are not trying to defraud them".
10:25:47 | gmaxwell: | fluffypony: or screwing people in other ways like monitoring transactions and selling the data. |
10:25:54 | fluffypony: | yup |
10:26:28 | Jaamg: | fluffypony: i think we really can't avoid that, we already have the nodes that have that power |
10:26:59 | amincd: | gmaxwell so would hashing power being split amongst a larger number of full-node-running miners address the main security threat that emerges from high node operating costs?
10:27:09 | gmaxwell: | Jaamg: depends on how cemented it is; in today's world mining is at least fundamentally open access; anyone who wants to pay the energy costs can do it.
10:27:35 | Jaamg: | gmaxwell: but i meant nodes like coinbase, xapo, bitpay etc... |
10:27:58 | gmaxwell: | Jaamg: even there; their behavior is backstopped by the very real threat of customer flight. |
10:28:01 | fluffypony: | Jaamg: they can't change the rules without forking themselves off the network |
10:28:26 | fluffypony: | I mean, I run in excess of 10 nodes, and none of them can be coerced into changing the rules or selling the data |
10:28:47 | gmaxwell: | amincd: the direct one would be improved; there are indirect ones; e.g. higher validation costs is a pressure (in several respects) for miners (and other nodes) to centralize.
10:28:48 | fluffypony: | so if CoinBase decides to do its own thing it won't make the smallest iota of difference to me |
10:29:45 | Jaamg: | fluffypony: where do you think users go if coinbase, bitpay, xapo and circle decide to fork their own network, to their network or the one where you have 10 non-coerced nodes running?
10:30:16 | gmaxwell: | My point was more like if someone shows up and starts saying "Coinbase, you need to block all transactions that we haven't approved of..." coinbase would naturally push back hard because such a move would put them out of business.
10:30:43 | amincd: | gmaxwell doesn't this become moot if something like GBT gets adopted? in that scenario, hash power contributors can rely on any full node for tx data |
10:31:00 | gmaxwell: | Jaamg: depends on what they were doing? e.g. to compete with bitcoin? or to act as a private clearing system between each other? The latter sounds completely reasonable to me. |
10:31:02 | fluffypony: | Jaamg: there are enough alternatives for merchants and users that it would be self-decimating for them to do so |
10:31:21 | davout: | Jaamg: also nobody cares about 1mn users with a couple of pennies each |
10:32:01 | gmaxwell: | amincd: random nodes don't expose GBT, it's a fairly expensive RPC call. and GBT has bandwidth usage linear in the block size; even at 1MB blocks that is a major reason many have refused to use it for remote pools.
10:33:02 | Jaamg: | fluffypony: exactly |
10:33:33 | amincd: | gmaxwell thank you for the information |
10:34:03 | Jaamg: | davout: i don't see your point |
10:37:31 | davout: | Jaamg: the point is what you see as 'big' entities are not that relevant |
10:38:05 | Jaamg: | not that relevant compared to what? |
10:39:17 | davout: | compared to what is actually needed to push the network one side or the other of a potential hard-fork |
10:41:17 | Jaamg: | nobody knows what is actually needed. my point is that coinbase's node is more significant than the one that say, I, would be running |
10:41:43 | fluffypony: | more significant to its users, sure |
10:41:47 | fluffypony: | not significant to the network as a whole |
10:41:51 | Jaamg: | but this all is now kind of a sidetrack from what I was discussing with gmaxwell earlier
10:42:10 | amincd: | if mining revenue were to increase proportionally to block size, wouldn't the share of revenues that a pool pays for full node operation remain constant? |
10:42:49 | Jaamg: | fluffypony: exactly |
10:42:53 | amincd: | ->assuming it has the same share of total network hashrate |
10:43:22 | gmaxwell: | amincd: assuming the costs were all linear? then proportionally, but then say you are spending $$$$ now on those costs and by combining with 4 other miners, you can turn that into $.. more profit, hurray. |
10:44:14 | gmaxwell: | so you could centralize now and cut your validation related costs by the amount of centralization; but if those costs are small in absolute numbers... why bother.
10:47:12 | amincd: | gmaxwell perhaps pools perceive everything relatively. For a pool making $1 mill a month, a $100 validation expense is just as significant as a $1000 validation expense for a pool earning $10 mill a month |
11:02:38 | amincd: | Jaamg I saw your response in the thread. my question is: what is the largest blockchain technologically possible? Is it the maximum size at which a typical user can run a full node. or a top 20% user? what determines the right size and why? |
11:13:39 | Jaamg: | amincd: yea, sorry, the argument is a bit vague in that sense. We discussed it a bit earlier here. |
11:15:13 | Jaamg: | I have no answer to what or who should determine it, and I can only vaguely say something like "the limit should be as high as possible but not higher than what makes running a full node with home computer / net impossible" |
11:15:51 | Jaamg: | i think 1MB is definitely below that limit, and I also think that 20MB is below that limit |
11:16:14 | Jaamg: | but maybe the limit still shouldn't be more than 20MB at this point |
11:22:16 | gmaxwell: | Jaamg: what evidence draws you to the conclusion that 1MB is below that limit? |
11:23:02 | gmaxwell: | Trends in node deployment now at the 0.4MB average block size level appear to be evidence that 1MB may be too high. :( (not that I'd seriously suggest reducing it further right now) |
11:24:35 | davout: | Jaamg: i think 1mb is actually too big wrt the UTXO size |
11:26:06 | davout: | even gavinandresen says so, if i understand correctly, the upper bound on UTXO growth per year is something like 50gb with a 1mb max block size
11:26:57 | gmaxwell: | actually somewhat more I think, because in the worst case the utxo growth I think can be somewhat bigger than the block. |
11:27:19 | gmaxwell: | but close enough. |
11:28:52 | gmaxwell: | it's hard to say; there are a lot of low hanging things we've been fixing; huge speedups. It may be that with all of them deployed the negative trends will reverse.
11:29:04 | gmaxwell: | and it'll become clear that 1MB is indeed below that level. |
11:30:20 | gmaxwell: | davout: I had suggested several times in these blocksize discussions that if there is any change the new limit should incorporate UTXO size impact. ... but a fundamental problem is that people pushing hard for upped limits (1) want no limit at all, and (2) don't see any/many risks or problems; and so there is no incentive to even discuss things more complex than remove/increase the limit.
11:30:42 | Jaamg: | davout: yes, I'm looking this more from systemic/incentives perspective. I should not be the one who decides what is "as high as is technologically possible", I just said what is my understanding |
11:31:08 | gmaxwell: | e.g. to incorporate the UTXO impact you get rid of the "size" limit and make it a cost limit, and assign costs to size, utxo increase, and negative cost to utxo decrease... |
11:31:36 | gmaxwell: | likewise the currently pretty braindamaged sigops limit can be turned into cost.
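A minimal sketch of the kind of combined "cost" limit being described here; the weights and budget below are illustrative placeholders, not a concrete proposal:

    # Toy block cost function: bytes, UTXO-set growth, and sigops priced into one budget,
    # with a credit for shrinking the UTXO set. All constants are hypothetical.
    WEIGHT_BYTE = 1
    WEIGHT_UTXO_CREATED = 40      # surcharge per output created
    WEIGHT_UTXO_SPENT = -30       # credit per output consumed
    WEIGHT_SIGOP = 50             # byte-equivalent price per sigop
    COST_LIMIT = 1_000_000        # single budget standing in for the separate size/sigops limits

    def block_cost(size_bytes, utxos_created, utxos_spent, sigops):
        """Collapse several resource dimensions into one scalar cost."""
        return (WEIGHT_BYTE * size_bytes
                + WEIGHT_UTXO_CREATED * utxos_created
                + WEIGHT_UTXO_SPENT * utxos_spent
                + WEIGHT_SIGOP * sigops)

    def block_within_limit(*args):
        return block_cost(*args) <= COST_LIMIT

    # Same size and sigops: the UTXO-shrinking block fits the budget, the UTXO-growing one doesn't.
    print(block_within_limit(800_000, 2_000, 4_000, 3_000))   # True  (cost 910,000)
    print(block_within_limit(800_000, 4_000, 2_000, 3_000))   # False (cost 1,050,000)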
11:31:58 | Jaamg: | davout: I can't argue against what you just said. if you are correct then I must assume that we are simply closer to Mike Hearn's Crash Landing
11:32:23 | Jaamg: | i was hoping more was technologically possible |
11:32:35 | davout: | Jaamg: sure, onchain changetip will crash, bitcoin can still be the backbone |
11:35:46 | gmaxwell: | * gmaxwell continues to protest the "crash" comparison; as that wasn't what we saw before; it seems insane to conjecture it now. We can't ever be sure things will be okay... but assuming it will crash is basically an assumption that it can _never_ work, since it can always be overloaded to arbitrary levels.
11:35:58 | Jaamg: | current blockchain can't be the backbone if, say, 5,000,000 new users suddenly want to make a single paper wallet, which i see as a potential crash landing scenario |
11:38:51 | gmaxwell: | 5,000,000? it'll take 8-10 days to clear; similar to not-great luck on an international wire transfer.
11:39:19 | gmaxwell: | doesn't sound like a "crash" to me, esp when you're paying to a piece of paper in a box; which won't tend to mind when the payment takes a while! |
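A back-of-the-envelope check of the 8-10 day figure, assuming roughly 7 transactions per second of sustained throughput:

    tx_count = 5_000_000
    tps = 7.0                      # rough sustained throughput at 1 MB blocks
    days = tx_count / tps / 86_400
    print(round(days, 1))          # ~8.3 days, consistent with the 8-10 day estimate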
11:39:21 | Jaamg: | it might be rather disappointing for a "future payment system"
11:39:56 | gmaxwell: | Jaamg: the fact that you don't get instant soft irreversibility is a much more immediate "disappointment" from a "future payment system"
11:40:26 | Jaamg: | i agree that crash landing was a bit exaggerated term, in the blog Hearn still described something similar to what davout just said
11:40:40 | gmaxwell: | Bitcoin is a currency and a payment system; the demands of being a secure decentralized currency make it not very impressive as a payment system by conventional metrics; fortunately it's powerful enough to securely overlay better payment systems on top (and has other features like smart contracts).
11:41:23 | gmaxwell: | well on-chain-changetip isn't a thing AFAIK; they clear all that stuff offchain on a centralized system (which is probably reasonable enough for the value levels involved)
11:43:06 | Jaamg: | yes, but point is that 'crash landing' should not be understood as a total destruction. It just means huge disappointments to many |
11:43:21 | Jaamg: | life goes on but "bitcoin wasn't the future currency/payment system after all" |
11:46:19 | gavinandresen: | gmaxwell: what time zone are you in? |
11:56:23 | gmaxwell: | pacific but wrapping over night and getting only 4 hours of sleep; which I'm overdue for. :( |
11:58:15 | Jaamg: | i guess after that line might be a good point to say big thanks to all you devs for the great work you're doing |
11:59:17 | gavinandresen: | gmaxwell: I took some melatonin last night, FINALLY got 8 hours…. |
12:01:16 | gmaxwell: | yea, that's actually too effective for me, I need like a 0.5 mg dose or I feel like I got hit with a bus the next day... on top of the general bitcoin activity, pieter and rusty are in town right now. I'm actually sleeping okay, just due to staying up until I'm exhausted; but having to get up a couple hours later is not so great. :)
12:02:08 | kanzure: | don't let a timezone stop you |
12:03:00 | gavinandresen: | in college I think I found my natural sleep cycle is something like 10 hours asleep, 18 hours awake…. |
12:03:21 | lmatteis: | lol |
12:03:26 | gavinandresen: | I was spending a lot of time in a dank basement on a big greenscreen terminal |
12:04:06 | gielbier: | i miss those green/black screens. |
12:04:43 | kanzure: | gavinandresen: when you said (in email) you wont reply to issues raised on irc, what did you mean? |
12:05:24 | kanzure: | or rather, your actual statement was something like, "it must be in the form of a pull request" (but that doesn't make sense-- pull requests don't seem like a good option for log delivery) |
12:05:26 | gavinandresen: | kanzure: I don’t know… I probably meant I can’t respond to every issue raised on IRC because I am not on IRC 24/7 |
12:05:51 | lmatteis: | what's your favorite color? |
12:06:00 | fluffypony: | blue |
12:06:41 | kanzure: | gavinandresen: i am curious if you feel the same way about email |
12:07:17 | gavinandresen: | kanzure: if it is an issue with Bitcoin Core, then it should be an issue on github. |
12:07:51 | gavinandresen: | kanzure: if it is an issue with something I wrote on a blog post, then email is fine, but I get a flood of email so don’t expect 100% response |
12:08:03 | kanzure: | do you read all email anyway? |
12:08:13 | lmatteis: | i get 0 email :( |
12:08:20 | gavinandresen: | kanzure: yes, I read all email. I don’t have time to respond to it all |
12:08:39 | gavinandresen: | kanzure: … and I have a pretty aggressive spam filter.... |
12:10:30 | kanzure: | sounds like we should dump all emails from bitcoin-development straight into github issues, then |
12:10:53 | lmatteis: | kanzure: why do you care so much? |
12:11:46 | gavinandresen: | kanzure: if you don’t like the process, then please, suggest a better way. I do also read every message to bitcoin-development mailing list, you can bring things up there. |
12:11:51 | kanzure: | lmatteis: he's basically saying that he can't respond to all possible issues unless they are in the issue tracker |
12:12:10 | gavinandresen: | For general questions, the bitcoin stackexchange is the right place. |
12:12:17 | kanzure: | gavinandresen: er, does that mean you don't think the email2issue idea was good? |
12:12:20 | gavinandresen: | kanzure: I’m not sure what you mean by “issue” |
12:12:45 | lmatteis: | kanzure: what's your point? |
12:12:51 | kanzure: | lmatteis: i was answering your question |
12:13:02 | gavinandresen: | kanzure: … and this is off-topic for bitcoin-wizards, anyway |
12:13:15 | kanzure: | fair enough |
12:13:16 | lmatteis: | yup, we only discuss about magic here |
12:13:49 | fluffypony: | and favourite colours |
12:19:49 | lmatteis: | is anybody experienced in DHTs here? i'm trying to find a simple solution of broadcasting messages in a DHT network by having peers listen on a specific key |
12:20:50 | lmatteis: | however, i'm afraid that has implications with regards to having too much data stored near the decided key |
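One way to read the idea (a toy, in-memory sketch rather than any particular DHT library): publishers append messages under a well-known rendezvous key and subscribers poll that key. The concern lmatteis raises is visible directly, since every message for a topic lands on the node closest to that single key:

    import hashlib
    from collections import defaultdict

    # Toy "DHT": a fixed set of node IDs; a key is stored on the node whose ID is
    # numerically closest to the key's hash. Purely illustrative, all in memory.
    NODE_IDS = sorted(int(hashlib.sha1(f"node-{i}".encode()).hexdigest(), 16) for i in range(8))
    STORE = defaultdict(list)     # node_id -> list of (key, message)

    def key_hash(key):
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def responsible_node(key):
        h = key_hash(key)
        return min(NODE_IDS, key=lambda n: abs(n - h))

    def publish(topic, message):
        # Everyone publishing to the same topic hits the same node: the hot spot.
        STORE[responsible_node(topic)].append((topic, message))

    def poll(topic):
        return [m for k, m in STORE[responsible_node(topic)] if k == topic]

    publish("broadcast/topic-x", "hello")
    publish("broadcast/topic-x", "world")
    print(poll("broadcast/topic-x"))               # ['hello', 'world']
    print({n: len(v) for n, v in STORE.items()})   # all load concentrated on one node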
12:47:41 | nsh: | gmaxwell, sorry i dozed off before seeing your synopsis of the argument against falsifiable assumptions in succinct ZK argument systems |
12:48:31 | nsh: | it does make intuitive sense after a little thought. it's eventually a counting argument so you can gloss the formalism a little without danger
12:55:16 | nsh: | can we have a sufficiently high degree of confidence in probabilistically-checkable proofs within zero-knowledge |
12:55:55 | nsh: | argument systems, if there is a restriction in the complexity of the asserted statements in terms of circuit depth? |
12:56:22 | nsh: | for bitcoin-y type applications, we are tending to want to prove relatively simple statements, aren't we? |
12:57:35 | nsh: | static-soundness suffices for zerocoin-likes, i think |
13:36:57 | c0rw1n: | c0rw1n is now known as c0rw|away |
15:08:16 | Mably__: | Mably__ is now known as Mably |
16:23:50 | amincd: | As far as I can tell, with all factors being held equal, as long as mining revenue and scale of economic activity (including number of users) scales linearly with block size, the number of full nodes, and most importantly, the number of full node running miners that are contributing to the network hashrate, should stay constant. |
16:26:30 | Jaamg: | are there any technical reasons why a certain number of nodes is needed?
16:26:51 | amincd: | if 1 billion people want to use Bitcoin, we can either have a hard limit that forces most of them to move BTC on the chain through intermediaries, but allows most of them to still audit the chain themselves, or the converse of allowing most of them to move BTC with their own private keys, but makes most of them dependent on others for auditing |
16:26:51 | Jaamg: | i'm thinking something like will slow propagation / latency become an issue |
16:31:28 | amincd: | Jaamg the main assumption is that Bitcoin security requires every user to be able to run their own full node. But I would argue that there's a case to be made that as long as the value density of tx data remains constant, systemic security shouldn't suffer. Propagation
16:32:45 | davout: | that doesn't make much sense, either you can check the full transaction history independently, or you don't |
16:33:53 | amincd: | I don't see any theoretical limits to bandwidth that would prevent propagation of a lot of tx data
16:34:44 | amincd: | davout if you can't audit the chain yourself, your own security suffers, but necessarily systemic security |
16:35:09 | davout: | you mean 'not necessarily', right? |
16:35:16 | Jaamg: | amincd: i agree that it's crucial that "anybody" can become a Bitcoin node at will. It will not be Bitcoin anymore if this property is taken away |
16:35:33 | davout: | either way, 'systemic security' is quite nebulous |
16:35:45 | davout: | and therefore irrelevant |
16:36:59 | Jaamg: | but i was thinking of situations where perhaps nodes around the globe are too sparse which could cause problems. I don't know if this is possible, that's what i was asking |
16:36:59 | kanzure: | heh an amusing problem for all of the proof-of-publication stuff: "Does a 60 year old archive of a block-chain that "lived" for 15 years of any value? It could probably be "faked" with a cluster of machines and some time?" (the answer is yes) |
16:39:20 | amincd: | Jaamg, but even if 'anybody' can become a Bitcoin node, not everybody can directly 'use' Bitcoin, by moving BTC on the chain with their own private keys. So one way or another, 'full access' to Bitcoin under a scenario with 1 billion users is limited to the rich
16:40:57 | amincd: | Better that the masses can move BTC with their own private keys, and trust the full nodes doing mining, than have no way to move BTC on the chain themselves, and have to trust bank like entities
16:43:34 | amincd: | End user claims have to be stored on off-chain ledgers, controlled by TTPs, with a block size that doesn't meet global demand, meaning auditing the chain yourself is pointless for the average user who can't afford to move BTC on the chain with their own private keys |
16:45:14 | amincd: | the issue is one of systemic risk IMO, and that hasn't been convincingly shown to suffer as block sizes increase in proportion to mining-revenue/scale-of-economic-activity |
16:46:46 | amincd: | davout yes I meant 'not necessarily'
16:46:49 | Jaamg: | the systemic risk i see with too big blocks is running a full node becoming too expensive or otherwise impossible
16:47:57 | Jaamg: | i don't think we are anywhere near that point, but say, if there wasn't any limit then this could happen if blockchain transactions become very popular |
16:48:14 | amincd: | Jaamg, going back to my earlier point: As far as I can tell, with all factors being held equal, as long as mining revenue and scale of economic activity (including number of users) scales linearly with block size, the number of full nodes, and most importantly, the number of full node running miners that are contributing to the network hashrate, should stay constant. |
16:49:05 | skeebop: | honestly if we build the infrastructure right 7 tps might just be fine |
16:50:40 | skeebop: | suppose we only needed bitcoin for things on par with SWIFT settlements. SWIFT does something like 200 tps. with payment channels, we can probably bring that down quite a bit. there should be very little reason ever to publish to a blockchain unless you're opening a channel or closing due to conflict |
16:50:47 | Jaamg: | amincd: i don't see why we should care about the number of full nodes as long as it's possible for anybody to launch a node |
16:51:08 | amincd: | I don't think the fundamental security properties of Bitcoin degrade as long as the average value density of transaction data |
16:51:18 | Jaamg: | and unless there are any technical reasons (maybe issues with too slow propagation etc) |
16:51:19 | amincd: | remains constant. |
16:51:56 | PRab_: | PRab_ is now known as PRab |
16:52:45 | amincd: | Jaamg I don't see why we should care about 'anybody can launch a full node' more than 'anybody can move BTC on the blockchain with their own private keys' |
16:53:19 | Jaamg: | amincd: i think that's the same thing expressed in two different ways |
16:53:48 | skeebop: | its not quite. long term it can become too expensive for the average fella to transact on chain. but it ought NEVER be too expensive to validate the chain on commodity hardware |
16:54:29 | amincd: | Jaamg not following you. You can move BTC on the blockchain with your own private keys without running a full node, by using SPV. Better to trust the mining collective than a bitbank |
16:55:16 | Jaamg: | amincd: oh yeah, now i see |
16:55:19 | Jaamg: | wait a sec
16:55:21 | amincd: | skeebop why is the former less important than the latter? |
16:56:31 | skeebop: | because micropayment channels will suffice |
16:56:49 | skeebop: | and realistically we will not be able to keep tx fees down given PoW drawbacks and the inflation taper |
16:56:52 | Luke-Jr: | amincd: what good are your own private keys, if you're trusting someone else to tell you what money you have/control with them? ;) |
16:57:18 | skeebop: | so as long as anyone can validate, its fine to trust third parties for setting up channels and so on because you can check up on them |
16:57:30 | Jaamg: | amincd: what Luke-Jr said |
16:57:40 | amincd: | skeebop, at some adoption levels, they do not. If the Lightning Network was globally adopted, most people would have to use bit banks, with current block sizes
16:57:45 | skeebop: | slash what Luke-Jr said lol |
16:58:02 | skeebop: | and thats fine, because they can validate/monitor the bitbanks |
16:58:03 | Luke-Jr: | amincd: Lightning makes it easier to *not* use banks |
16:58:10 | skeebop: | and anyone can start one |
16:58:36 | skeebop: | well but you still need funding txs. and if its too expensive the bitbanks will help you set those up |
16:59:08 | amincd: | Luke-Jr Lightning Network requires 130 MB blocks for every person on Earth to use. therefore everyone won't be able to use it if we don't raise the block size
16:59:18 | skeebop: | lol what?! |
16:59:21 | Luke-Jr: | amincd: uh, compare apples to oranges? |
16:59:31 | skeebop: | you mean if everyone opens their channel all at once .... |
16:59:32 | Luke-Jr: | 130 MB is far better than 1 TB (non-Lightning) |
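A rough reconstruction of where a figure in that ballpark could come from; the per-person transaction rate and transaction size below are assumptions, not numbers from this discussion:

    people = 7_000_000_000
    onchain_tx_per_person_per_year = 2     # e.g. occasionally opening/refreshing a channel
    bytes_per_tx = 500                     # rough average transaction size
    blocks_per_year = 6 * 24 * 365         # one block per ~10 minutes

    block_bytes = people * onchain_tx_per_person_per_year * bytes_per_tx / blocks_per_year
    print(round(block_bytes / 1e6))        # ~133 MB per block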
16:59:41 | skeebop: | I'm still pushing for 1MB |
16:59:43 | Luke-Jr: | skeebop: "for every person on Earth" |
17:00:14 | Luke-Jr: | if we bump up against 1 MB with Lightning, I'm all for raising it.. |
17:00:42 | amincd: | Luke-Jr what good is auditing the chain when your claim to BTC is held on a TTP ledger and only as good as the ability of the TTP to honour it
17:00:45 | skeebop: | ok so for every person on earth to open a payment channel on bitcoin with 7 tps will take 4.5 years |
17:00:50 | skeebop: | totally legit :) |
17:00:57 | Luke-Jr: | amincd: TTP? |
17:01:14 | amincd: | Trusted Third Party, e.g. a bitbank |
17:01:21 | skeebop: | who said anything about their ledger? |
17:01:24 | Luke-Jr: | amincd: I don't advocate that. |
17:01:38 | Luke-Jr: | amincd: Lightning is not TTPs |
17:02:09 | Luke-Jr: | it uses UNtrusted third parties |
17:02:49 | amincd: | skeebop, everyone on Earth can't use the Lightning Network with the current block size. most people would need to access it through intermediaries, meaning they would have their claim to BTC stored off-chain on the ledger of TTPs |
17:03:10 | skeebop: | sure they can! itll just take 4.5 years to open a channel for each one ... |
17:03:18 | skeebop: | 1000000000/7./60/60/24/365 |
17:03:52 | amincd: | Luke-Jr everyone on Earth can't use the Lightning Network if we assume the block size has to be small enough to allow everyone to run a full node |
17:04:12 | skeebop: | oh shit theres 7billion not 1. ok so 30 years. maybe not as great |
17:04:19 | Luke-Jr: | amincd: this is not true |
17:04:36 | Luke-Jr: | 130 MB blocks will be reasonable long before everyone on Earth has technology |
17:04:37 | amincd: | 130 MB.. |
17:05:06 | skeebop: | ya I'll give you max 2MB |
17:05:11 | skeebop: | :D |
17:06:06 | amincd: | Luke-Jr if everyone on Earth decides they want to use Bitcoin tomorrow, they would have to use bitbanks, whether the LN is in place or not, if we assume that the only acceptable scenario is one where the average user has to be able to run a full node |
17:07:06 | Luke-Jr: | everyone on Earth WON'T decide they want to use Bitcoin tomorrow |
17:07:08 | amincd: | it's not a sound principle IMO |
17:07:08 | skeebop: | fortunately for us, every one on earth will not want to use bitcoin tmrw. lets start opening payment channels for those who do asap |
17:08:15 | amincd: | the fact that it doesn't work in theory shows the principle underlying it is unsound. the principle should work under any adoption level |
17:08:47 | skeebop: | mmm no it shouldnt. this is practical engineering not theoretical |
17:10:50 | amincd: | skeebop the insistence on limiting tx data so that the average person can run a full node is not justified by practical engineering IMO |
17:11:15 | skeebop: | ah fair point. but thats one of the design criteria' |
17:11:33 | skeebop: | the client demands, at all costs, that the average person can run a full node |
17:11:42 | skeebop: | now go be a practical engineer around that ;) |
17:12:02 | amincd: | skeebop and I'm arguing that it's quite possibly an unsound design criteria |
17:12:18 | skeebop: | unsound meaning what? |
17:12:37 | amincd: | meaning that it doesn't maximise utility for the world |
17:12:40 | skeebop: | impossible? untenable? dumb? not necessary? |
17:13:27 | amincd: | better to allow more txs to be generated as demand for txs increases |
17:14:01 | skeebop: | well i think its arguable. theres no really nice way to quantify the tradeoff |
17:14:06 | amincd: | as long as txs meet the value/fee requirements |
17:15:06 | skeebop: | i think it would do us well (and maybe even maximize utility) to be as creative as possible within these constraints before releiving them |
17:15:14 | amincd: | skeebop thought experiments like 'everyone on Earth wants to use Bitcoin, what do we do' help inform whether the trade-offs are right |
17:15:49 | skeebop: | undeniably. but the answer shouldn't be "sacrifice key design criteria" |
17:16:33 | amincd: | the design criteria should make the optimal tradeoffs. It shouldn't be defended blindly. |
17:16:55 | skeebop: | the challenge is to make the optimal tradeoffs given the design criteria |
17:17:26 | amincd: | we need the right design criteria.. the current one is very possibly not right IMO |
17:19:11 | skeebop: | bitcoin is like science. we want people to be able to use it and independently validate it. explosion in the former has crippled the latter, arguably to our detriment |
17:19:40 | skeebop: | thinking especially in biomedical sciences |
17:20:14 | amincd: | well, that assumption doesn't seem right to me, for reasons I explained earlier |
17:20:38 | skeebop: | ok i dont want to get carried away on the analogy cuz itll break down soon. but you really suppose sacrificing the "anyone can verify" property is worth it? |
17:21:15 | amincd: | skeebop yes absolutely. Verifying it is useless if you can't store your tx on it |
17:21:33 | skeebop: | personally i'd rather see it done with atomic-swaps to alt-chains + payment channels rather than bumping the block size too soon |
17:22:18 | skeebop: | so maybe slow and steady growth in blocksize is warranted. but this protocol is so damned precarious I can't help feel itll crumble over itself |
17:23:16 | amincd: | the idea that tx data should be limited to ensure everyone can verify, when the same limit ensures everyone can't store, is unsound, regardless of efficiency improvers like the LN and sidechains |
17:24:02 | skeebop: | everyone can store, we just need to be more creative ;) |
17:24:24 | amincd: | skeebop any reason for your belief that'll crumble with more tx data (higher operating node costs) ? |
17:24:24 | Jaamg: | amincd: it's not about 'everyone can verify', it's about not needing to trust another party if you want to broadcast
17:25:12 | amincd: | Jaamg, but with limits on tx data, everyone can't broadcast txs anyway |
17:26:07 | skeebop: | ya so higher node costs but thats separate, i mean unpredictable side effects from a seemingly innocuous change |
17:26:26 | skeebop: | software bugs are hard :'( |
17:26:27 | amincd: | Jaamg better to be able to create your own txs and trust miners and the sample of full nodes you poll, than not be able to broadcast your own txs period
17:26:49 | skeebop: | but the fees are going to be so high that you'd be crazy to do that anyways ... |
17:27:05 | skeebop: | you know, unless we move to proof of stake, but I hear you can get shot for speaking such blasphemy around here |
17:28:04 | amincd: | skeebop tx fees don't need to increase for total mining revenue to scale linearly with block size. As for PoS, it can't establish secure distributed consensus |
17:29:25 | Jaamg: | amincd: we have limit now and everyone can broadcast |
17:29:39 | skeebop: | so fees right now are a couple orders of magnitude below inflation so not sure how you expect revenue to scale without jacking fees. and PoS can, please don't cite the Poelstra paper, let's not get into it now ;)
17:31:09 | amincd: | Jaamg, under the design goal of one person = one node, this breaks down at some adoption levels. |
17:32:25 | Jaamg: | amincd: yes, i think we should have as big max block size as possible while still keeping the property 'anybody can become a node' |
17:32:59 | Jaamg: | if transactions hit that limit some day we have a positive problem |
17:33:38 | amincd: | Jaamg I don't think one person = one node makes sense, for reasons I provided. I guess we can go in circles forever on this |
17:33:49 | Jaamg: | or ehm, maybe not positive but more positive than "should we increase the 1MB limit" |
17:34:33 | Jaamg: | amincd: it doesn't need to be one person = one node, the point is that the possibility must be there |
17:35:46 | amincd: | Jaamg Yeah I understand that. I disagree that this is the optimal design goal, for reasons I provided
17:36:12 | Jaamg: | amincd: it doesn't need to be 1 person = 1 newspaper, but there must be the possibility for anybody to start a newspaper |
17:38:28 | amincd: | everyone being able to create and store a tx on the Bitcoin blockchain, if it does not create systemic risks, and I don't see any compelling arguments that it does |
17:38:45 | amincd: | .. is essential |
17:41:54 | jgarzik: | amincd, OK. So - we need a network that supports... 1B mobile phones? |
17:42:03 | jgarzik: | Must quantify, not wave hands in the air. |
17:42:31 | jgarzik: | Paypal rate is 130 tps. VISA is 2000 tps. |
17:43:20 | jgarzik: | In the bitcoin system, a single computer needs to be able to process 100% of the worldwide traffic. |
17:43:59 | arubi_: | arubi_ is now known as arubi |
17:45:09 | amincd: | jgarzik, if it doesn't introduce systemic risk, I don't think there should be any limit to what the network supports. If the world demands 1 million tps, and is willing to pay fees for them, the network should support it, if it's not found to be systemically unsound |
17:45:58 | amincd: | jgarzik why does a single computer need to be able to validate global traffic?
17:46:11 | jgarzik: | amincd, Because that is how bitcoin is designed |
17:46:32 | amincd: | not according to Satoshi's original description |
17:46:36 | jgarzik: | It's nice to be able to wave a hand and say "it should do X" but that is not how reality works. |
17:46:46 | jgarzik: | amincd, Yes, according to Satoshi's description |
17:47:03 | jgarzik: | amincd, Every bitcoin node validates 100% of the blockchain traffic, because it does not trust other nodes. |
17:48:06 | jgarzik: | amincd, In your system, that means every computer must handle 1 million tps all by itself, as well as copying 1 million tps to several other nodes, as well as serving data to 1 billion mobile phones |
17:48:21 | jgarzik: | Run the math on that |
17:48:33 | amincd: | https://bitcointalk.org/index.php?topic=532.msg6306#msg6306 |
17:49:20 | amincd: | jgarzik every computer being able to validate global traffic wasn't an original design goal of Bitcoin
17:51:34 | jgarzik: | amincd, full nodes being able to validate 100% of global traffic was and is a design goal of bitcoin
17:52:07 | jgarzik: | amincd, lightweight clients are in a separate category, referred to in this conversation as mobile phones - non validating nodes which are more populous |
17:52:23 | amincd: | jgarzik the quote by Satoshi that I linked to directly contradicts your claim |
17:53:00 | jgarzik: | amincd, no it doesn't. I just explained the context. |
17:53:22 | amincd: | The current system where every user is a network node is not the intended configuration for large scale. That would be like every Usenet user runs their own NNTP server. The design supports letting users just be users. The more burden it is to run a node, the fewer nodes there will be. Those few nodes will be big server farms. The rest will be client nodes that only do transactions and don't generate. |
17:53:44 | jgarzik: | re-read what I just wrote |
17:54:29 | amincd: | jgarzik I did and, with all due respect, it's not an accurate description of Satoshi's statement. |
17:55:30 | jgarzik: | amincd, sigh, You need to learn more about bitcoin |
17:55:32 | amincd: | "The more burden it is to run a node, the fewer nodes there will be." |
17:55:46 | amincd: | it doesn't get any plainer than that |
17:56:22 | jgarzik: | Each node in that last sentence is a full node, each of which independently processes 100% of the worldwide traffic. |
17:57:01 | amincd: | no it isn't. Satoshi meant SPV by "client nodes" |
17:57:46 | jgarzik: | "The more burden it is to run a node, the fewer nodes there will be." <<-- refers to full nodes |
17:57:59 | jgarzik: | " The rest will be client nodes" <<-- refers to SPV nodes, non validating |
17:58:17 | amincd: | jgarzik yes exactly |
17:59:57 | jgarzik: | A system of 1 million tps requires that each and every full node independently process 1 million tps, and then relay 1 million times X to other nodes.
18:00:30 | amincd: | jgarzik, yes |
18:00:39 | jgarzik: | That narrows the ability to run a full node down to.... 1 or 2 in the world. |
18:00:45 | jgarzik: | Which is even more centralized than VISA |
18:01:05 | jgarzik: | Easily controlled by big institutions, even more so than the US Dollar
18:01:31 | jgarzik: | You have reinvented centralization, at which point a client-server model would be cheaper and more efficient. |
18:01:49 | jgarzik: | The exact opposite of egalitarian access |
18:41:44 | amincd: | jgarzik 1 million tps would probably entail global adoption, and an economy far larger than today's
18:42:18 | amincd: | 1 million tps is 31 trillion txs per year |
18:42:35 | amincd: | currently there are 3 trillion tpy |
18:43:47 | amincd: | so in this scenario, when Bitcoin has a market 10 times bigger than the entire current world economy, there would be many more big players that could run full nodes |
18:45:05 | amincd: | there's no reason to assume full nodes will decline as tx volumes increase, as long as tx volumes scale linearly with scale of economic activity on the Bitcoin network, and mining revenue
18:48:24 | Jaamg: | why should anything be assumed one way or the other regarding the number of full nodes? |
18:53:19 | amincd_: | Currently 3 trillion txs per year: http://www.accenture.com/SiteCollectionDocuments/PDF/Accenture-Digital-Payments-Transformation-From-Transaction-Interaction.pdf |
18:55:58 | amincd_: | jaamg the idea that it must be possible for an average computer to validate 100% of global Bitcoin traffic rests on the idea that the number of independent parties running full nodes declines with tx volume. jgarzik suggests that at 1 million tps, there'd only be one or two full nodes. I'm disputing that there is any compelling reason to believe that would be the case |
19:01:21 | amincd_: | 1 million tps is 1-2 GB/s. Not necessarily out of reach for hundreds of thousands of parties in the event that Bitcoin usage outsizes all current means of transacting by a factor of 10. This would be a scenario where Bitcoin is orders of magnitude more important than the major financial institutions like Goldman Sachs, so dedicated fiber for full nodes is conceivable |
19:05:43 | amincd_: | To put it another way, at 1 million tps, Bitcoin would be 10X more important than the entirety of the current global financial and monetary system, central banks and treasuries included. This would create significant demand for running these super nodes. There'd be more than one or two.
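Checking the rough numbers above; the per-transaction size is an assumption:

    tps = 1_000_000
    print(tps * 86_400 * 365 / 1e12)    # ~31.5 trillion tx/year, vs ~3 trillion today

    bytes_per_tx = 1_000                # assuming 0.5-2 KB per transaction
    print(tps * bytes_per_tx / 1e9)     # ~1 GB/s of transaction data to validate and relay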
19:07:10 | sipa: | i do not think this is relevant |
19:07:20 | sipa: | the number of full nodes is a distraction |
19:07:29 | Jaamg: | i'm thinking the exact same thing |
19:07:41 | sipa: | it is interesting as a means to detect how easy it is to run one |
19:07:59 | sipa: | but the relevant metric is who you give access to validation |
19:08:07 | sipa: | not how many people actually do |
19:08:16 | amincd_: | an average computer being able to run one is an arbitrary design requirement. Why not the average mobile phone?
19:08:29 | sipa: | yes, it is arbitrary |
19:08:42 | sipa: | but it is the most important question we need to answer |
19:08:59 | sipa: | it is a compromise between validation scalability and access scalability |
19:09:19 | sipa: | there is no optimal value for this
19:09:24 | sipa: | but it is a choice |
19:09:42 | amincd_: | sipa: it is indeed the most important question, but IMO, it hasn't been discussed enough. I don't even see a compelling reason to ensure that those without access, are able to validate |
19:10:16 | sipa: | i think there is |
19:10:21 | amincd_: | sipa: there is an optimum for net utility. We just have no way to know for certain what it is |
19:11:00 | amincd_: | if you don't have access, your txs are stored in off-chain ledgers, that are only as good as the willingness and ability of the TTPs to honor them. You can validate the main chain, but you can't validate the TTP's ledger |
19:11:04 | sipa: | even if for some reason only big banks were able to transact, i think the only benefit bitcoin could offer is the ability for outsiders to validate they are not cheating
19:11:11 | sipa: | and cheating is the wrong word |
19:12:03 | sipa: | simply the fact that one needs to convince the population of full-node-using users, is the strongest reason why the rules they demand of the system remain maintained
19:14:20 | gavinandresen: | gmaxwell : I’m thinking of creating a spreadsheet with your proposed dynamic fee algorithm, so I can get a feel for how subsidy, hash rate, price, fees, etc work. You haven’t already done that, have you? |
19:14:39 | amincd_: | whether a Bitcoin network only directly accessible by big banks is validated by just the big banks, or every person in the world, makes very little practical difference in my opinion. A few thousand super nodes validating that the other super nodes are acting honestly is going to be effective. Everyone else can reliably find out if the banks are acting within the rules of the protocol. |
19:15:22 | sipa: | amincd_: i agree the value is higher for people participating |
19:15:35 | sipa: | especially those actually receiving transactions |
19:16:35 | amincd_: | so then, my reasoning goes, access, and systemic security (a large enough number of independent parties running full nodes), should be the priority, not universal validatability. |
19:21:19 | Jaamg: | i think universal validatability is a distraction |
19:21:52 | Jaamg: | it's about freedom to participate |
19:22:11 | Jaamg: | we talked about this earlier |
19:22:23 | gmaxwell: | gavinandresen: no-- (I did simulate some older one at one point; but not the current one). It certainly does need to be done; the formula might need some constant factors twiddled to get sane behavior.
19:23:02 | amincd_: | Jaamg "freedom to participate" is vague. The world cannot participate in creating main chain txs if the block size is limited to allow a typical computer to run a full node
19:23:30 | Jaamg: | amincd_: it is currently limited and we currently have freedom to participate |
19:24:18 | amincd_: | Jaamg: we had this exchange before. The current design principle fails to allow full participation in generating txs at higher adoption levels. |
19:24:49 | amincd_: | It's not future proofed |
19:24:59 | Jaamg: | if we hit the max block size regularly, users get disappointed (eg. "crash landing scenario") and go do other things |
19:25:10 | Jaamg: | then there is room in the blockchain again |
19:26:14 | Jaamg: | to avoid this scenario i suggest that we keep the max block size as big as possible |
19:26:17 | amincd_: | Jaamg: that's true |
19:26:31 | Jaamg: | that's why i like gavinandresens proposal to increase max block size |
19:27:19 | sipa: | define "as possible" |
19:28:27 | sipa: | i don't believe in the existence of such a number
19:28:41 | sipa: | there is a max block size above which things just stop working |
19:29:09 | sipa: | but the number above which the chain becomes insecure or uninteresting is a question of tradeoffs |
19:30:28 | Jaamg: | sipa: i think it can only be expressed vaguely like that. i think it shouldn't be bigger than what makes running a node with regular home pc impossible |
19:31:03 | sipa: | that's a meaningful choice, but it is a choice |
19:31:13 | gavinandresen: | gmaxwell: I especially want to do some calculations around the subsidy halving, because having it suddenly become much more or less attractive to build bigger blocks is undesirable. That’s what made me start to think that adjusting the upper limit at the subsidy halving might make sense….
19:31:51 | amincd_: | sipa: the most compelling argument I can see for a small block size is that resistance to government censorship might not scale with economic power, so a network with 5,000 super nodes, handling $1 trillion in economic activity, wouldn't be as resistant to censorship as a network with 5,000 consumer-grade nodes handling $10 billion in economic activity. However, this is merely speculation.
19:32:08 | sipa: | agree |
19:35:48 | sipa: | amincd_: i have to agree that the most important question is validatability by the participants |
19:36:30 | sipa: | amincd_: but i believe the participants should also have the right to avoid evolution towards them becoming unable to participate |
19:36:53 | sipa: | in both ways (access and validation) |
19:36:58 | gmaxwell: | gavinandresen: Good thing to think about-- the motivation for the original scheme had actually been "in the limit" with no subsidy; e.g. it's really supposed to be something where picking some size gives you an optimal fee income; and the optimal size is less than infinity (but larger than average if there is a backlog of good fees). Mark and I spent some time talking about omitting the subsidy from the effect (e.g. by increasing the subsidy when you make a higher diff block) but that's complex and hard to make people confident of; we realized that leaving it in is no big deal, it's just a bit of extra pressure against bigger blocks that goes away over time.
19:37:09 | gmaxwell: | gavinandresen: but a sudden change may be weird. |
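A toy numeric illustration of the tradeoff being discussed: making a block larger than a baseline costs extra effective difficulty, so the miner gives up a fraction of (subsidy + fees), while fee income grows sublinearly with size. The fee curve and penalty function below are made-up stand-ins, not the actual proposed formula; the point is only that the profit-maximizing size is finite and shifts when the subsidy changes:

    BASELINE_MB = 1.0

    def fees_collected(size_mb):
        # Diminishing fee income per extra MB (a stand-in demand curve), in BTC.
        return 1.0 - 0.5 ** size_mb

    def difficulty_penalty(size_mb):
        # Fraction of expected reward given up for exceeding the baseline size.
        excess = max(0.0, size_mb - BASELINE_MB)
        return 0.05 * excess ** 2

    def expected_reward(size_mb, subsidy):
        return (subsidy + fees_collected(size_mb)) * (1 - difficulty_penalty(size_mb))

    for subsidy in (25.0, 12.5, 0.0):   # before/after a halving, and the no-subsidy limit
        sizes = [s / 10 for s in range(1, 81)]
        best = max(sizes, key=lambda s: expected_reward(s, subsidy))
        print(f"subsidy={subsidy:4.1f}  best size={best:.1f} MB  reward={expected_reward(best, subsidy):.3f}")

With these stand-in curves the optimal size jumps upward at each halving, which is the discontinuity gavinandresen is worried about.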
19:38:06 | gavinandresen: | gmaxwell: so you imagine some fixed upper limit that never changes? Like 11 gigabytes ? |
19:38:55 | amincd_: | sipa: I think 'Bitcoin's global traffic must be validatable by a typical computer' makes a decision that participation is less important than validatability |
19:39:18 | sipa: | amincd_: agree |
19:39:52 | amincd_: | so maybe we should re-examine that, and consider evolution toward super nodes as one possible future |
19:46:58 | Jaamg: | what i meant by 'freedom to participate' was that running a node should be possible for 'anybody'. Maybe this actually is what you meant by validatability earlier.
19:47:30 | Jaamg: | i probably misunderstood, sorry |
19:50:59 | sipa: | running a node by someone who doesn't actually rely on the result of the validation it provides is not very useful |
19:51:14 | sipa: | those nodes can be forked off without being noticed
19:58:05 | kanzure: | sipa: i have noticed you mentioning that lately. i wonder if there is a good physical quantification or measurement of whether a node is being used in a meaningful way? |
20:04:55 | Transisto: | Transisto is now known as testtt |
20:09:49 | testtt: | testtt is now known as Transisto2 |
20:45:32 | rusty: | rusty has left #bitcoin-wizards |
23:52:13 | dgenr8: | not very useful, yes. and if the world's 100000 tps were all denominated in bitcoin, but were consolidated down to 7 tps in the blockchain, none of the consolidated traffic would be meaningful to any average validator |
23:54:16 | phantomcircuit: | kanzure, there's no way to tell |
23:54:44 | phantomcircuit: | dgenr8, actually it would be meaningful for everybody then |
23:55:16 | dgenr8: | it's a bunch of huge settlement trades between big banks |
23:55:32 | kanzure: | phantomcircuit: i don't mean remotely |
23:55:49 | kanzure: | i mean local |
23:56:55 | phantomcircuit: | kanzure, not really since you can interface with a trusted node over the p2p protocol |
23:57:34 | phantomcircuit: | dgenr8, im not sure settlement is the right word there though (or banks either) |
23:57:50 | kanzure: | the word "settlement" is overloaded in finance to mean many many things |
23:58:38 | phantomcircuit: | kanzure, yeah that's part of why i dont think it's the proper term to use |
23:58:44 | phantomcircuit: | it's confusing as to what you mean exactly |
23:59:02 | phantomcircuit: | dgenr8, take for example lightning hubs when the micropayment channel expires |
23:59:15 | phantomcircuit: | those transactions can be validated as correct by the participants |
23:59:30 | phantomcircuit: | thousands of participants could potentially now be interested in that one transaction |
23:59:48 | dgenr8: | i was thinking "trades" is the most off ... transfers is better. settlement is accurate -- it's a net +- amount summarizing and finalizing off-chain activity. for banks, whatever .. "entities" |