00:06:40 | mike4: | mike4 is now known as c--O-O |
01:11:16 | EasyAt_: | EasyAt_ is now known as EasyAt |
01:48:49 | licnep__: | licnep__ is now known as licep |
01:48:53 | licep: | licep is now known as licnep |
01:52:30 | gmaxwell: | gmaxwell is now known as gmaxwell_ |
01:53:15 | gmaxwell_: | gmaxwell_ is now known as gmaxwell |
01:56:17 | phantomcircuit: | * phantomcircuit waves @ gmaxwell |
02:12:31 | kanzure: | hello |
02:13:08 | kanzure: | these guys are trying to slap in a credit system on top of "proof of storage", |
02:13:11 | kanzure: | https://groups.google.com/forum/#!topic/maidsafe-development/W-n-IQ_TUis |
02:13:16 | gmaxwell: | hi. so yea, I dunno how they can provide what they're saying there. |
02:13:35 | kanzure: | is there a proof that "proof of storage" is unachievable? i am sure some academic monkeys might have considered this. |
02:14:12 | kanzure: | i suppose really the problem is the introduction of a credit rather than the concept of using hashes for proofs |
02:14:21 | gmaxwell: | As I pointed out in #mtgox-chat you can compactly prove that you know some data, but ... right.
02:15:16 | amiller_: | hi kanzure! i remember you from the jstor browser downloader thing :) |
02:15:24 | kanzure: | i enjoyed reading https://bitcointalk.org/index.php?topic=310323.0 |
02:15:39 | kanzure: | yeah i was just about to start complaining about your https://gist.github.com/amiller/7131876 |
02:15:53 | kanzure: | gmaxwell: oh yeah, i didn't realize you were the same gmaxwell that did the jstor torrent dump |
02:16:15 | gmaxwell: | well let's see.. hash tree your data ... encode it with a long error correcting code (e.g. for the amount of redundancy you want network wide.), now encrypt it.
02:16:18 | kanzure: | gmaxwell: https://github.com/kanzure/pdfparanoia (actually, jstor implemented a workaround recently, so i need to go reverse engineer it...) |
02:16:59 | gmaxwell: | kanzure: for that dump they were super easy to extract, e.g. you could just pdf to images and then discard the color layers. :) |
02:17:25 | gmaxwell: | Now no two nodes should have the same data. So you might be able to graft a compensation scheme onto that, at least so long as it's the original storing party paying.
02:17:28 | kanzure: | my approach is based on flate decoding and their position on the page |
02:18:00 | kanzure: | (i remove the actual pdfobject from the stream or w/e; in fact, i somewhat break the pdf file format by not mending the xref table)
02:18:17 | gmaxwell: | but you could still have all the users dump all the data onto a single real machine, so your redundancy might be fictitious... but they wouldn't save any storage due to that.
02:18:32 | gmaxwell: | downside in that case is that the network can't heal without your help. |
02:18:46 | kanzure: | correct, but if you reward credit based on what the nodes report, then those colluding centralized nodes earn a greater amount of credit i think |
02:19:16 | kanzure: | they tried to make a handwavy comparison to mining :) it is cute: http://blog.maidsafe.net/2014/02/18/token-on-maidsafe-network/ |
02:19:29 | gmaxwell: | yea, if it's something other than the storing party paying then you're just hosed, because I can just make my data the digits of SHA256(secret||hash) so anyone can statelessly simulate storing it.
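[Editor's note: gmaxwell's stateless-simulation trick can be made concrete. If the "stored" data is defined as a pure function of a secret and an index, a prover can answer any retrieval challenge without keeping anything on disk. A minimal Python sketch; the function names and the 8-byte index encoding are illustrative, not from the log.]

```python
import hashlib

def block(secret: bytes, index: int) -> bytes:
    """Regenerate the i-th 32-byte 'stored' block on demand.

    Nothing is ever kept on disk: the whole dataset is a pure
    function of (secret, index), so any storage challenge can be
    answered statelessly.
    """
    return hashlib.sha256(secret + index.to_bytes(8, "big")).digest()

secret = b"my secret"
# A challenger asks for arbitrary blocks; the "prover" just recomputes them.
challenge_indices = [0, 17, 123456]
responses = [block(secret, i) for i in challenge_indices]
assert all(len(r) == 32 for r in responses)
# Re-answering the same challenge later gives identical bytes,
# indistinguishable from having actually stored the data.
assert responses == [block(secret, i) for i in challenge_indices]
```

This is why rewarding storage reported by third parties fails: the challenge protocol cannot distinguish real storage of such data from on-demand recomputation.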
02:20:33 | gmaxwell: | wow there is a lot of completely confused gibberish on that page. |
02:20:41 | kanzure: | you know, i didn't even notice the watermarks were removed in your dump |
02:20:53 | andytoshi: | gmaxwell: can you decipher what maidsafe actually is yet? |
02:20:54 | kanzure: | torrents don't work as a mechanism for distributing science content. libgen has 7000 torrents that nobody seeds. |
02:21:24 | gmaxwell: | kanzure: They weren't removed, actually I published the watermarked copies, to make it clear that they were from JSTOR. |
02:21:29 | kanzure: | andytoshi: it just seems to be wrappers around generic storage concepts |
02:21:31 | gmaxwell: | (but I had previously removed them) |
02:21:35 | kanzure: | gmaxwell: oh i mean, watermarks like IP addresses? |
02:21:45 | kanzure: | andytoshi: https://github.com/maidsafe/MaidSafe/blob/master/tools/drive_test.py |
02:21:50 | kanzure: | andytoshi: https://github.com/maidsafe/MaidSafe/blob/master/tools/vault.py |
02:21:59 | gmaxwell: | kanzure: oh no, those weren't there. I left the logotype in. :) |
02:22:07 | kanzure: | i see |
02:22:13 | gmaxwell: | (they weren't originally there either, because reasons.) |
02:22:26 | kanzure: | i recently met someone who was ex-jstor |
02:22:38 | kanzure: | he was happy to spill the beans about terrible internal architecture stuff :) |
02:22:49 | gmaxwell: | kanzure: torrent is terrible, but it's an okay bulk method to get data to other people who will make it available via better mechanisms.
02:23:07 | kanzure: | right- specifically i mean the social adoption around torrents, not the protocol itself |
02:23:10 | kanzure: | nobody seeds etc etc |
02:23:19 | gmaxwell: | yea. |
02:23:34 | gmaxwell: | the torrent webseed stuff was basically junk last I checked too. |
02:24:13 | kanzure: | it's a hard problem to solve. let's just pretend a certain someone has multiple terabytes of extremely high-quality pdfs. now what.. there's basically no humanitarian org that will touch it (argh). |
02:24:23 | kanzure: | internet archive is generally uninterested |
02:24:25 | tt_away: | ethereum was proposing data storage contracts based around doing verification of merkle trees containing the data, i think they're in the ethereum whitepaper. |
02:27:53 | jcorgan: | gmaxwell: what was it you didn't like about the concept of a torrent web seed? |
02:31:15 | gmaxwell: | jcorgan: it's a great concept, but at least when I last tried they basically didn't work— required server considerations and then some/many/most torrent clients wouldn't actually use them. Maybe it's improved.
02:34:59 | jcorgan: | we distribute the GNU Radio dvd ISO files via torrent; it has a web seed off our server, seems to work well enough (no complaints), and usually we only have 4-5 seeders or less. |
02:35:46 | jcorgan: | i was just curious about your comment as this is the only experience i have with web seeded torrents, not a big torrent user myself |
02:36:56 | gmaxwell: | may be working better now then, my experience is dated by a few years.
02:39:11 | jgarzik: | Web seed is just a slack way of getting around not running a torrent daemon. But it seems like, if you have a 24/7 web server, you can likely figure out how to set up a 24/7 torrent seed. |
02:39:32 | jgarzik: | Web seeding is definitely broken or odd in several clients |
02:40:03 | gmaxwell: | jgarzik: if it worked super well it would allow you to use a regular true HTTP only CDN, where you couldn't necessarily run a full time seed. |
02:40:07 | kanzure: | if you are running a web server then users will probably pester you to just dump the data over HTTP |
02:40:21 | jgarzik: | gmaxwell, no argument |
02:40:37 | jgarzik: | but it never worked well, so nobody really uses it or demands good support, so it never works well |
02:45:58 | just[dead]: | just[dead] is now known as justanotheruser |
02:47:07 | phantomcircuit: | jgarzik, iirc the only client that ever supported it correctly was... something that started with an s |
02:47:14 | phantomcircuit: | i can't even remember, it's been so long
02:47:37 | phantomcircuit: | shareaza |
02:47:55 | jgarzik: | the DHT experience was disappointing, too |
02:48:23 | maaku_: | * maaku_ would participate in a #bittorrent-wizards channel |
02:48:24 | jgarzik: | major clients like Vuze have their own torrent DHTs, and thus, their own islands separate from the rest of the world |
02:48:33 | phantomcircuit: | jgarzik, it takes thousands of peers before dht works well |
02:48:53 | jgarzik: | phantomcircuit, that was not my experience |
02:49:10 | phantomcircuit: | jgarzik, DHT + peer exchange can merge the islands |
02:49:24 | phantomcircuit: | but you need ~20 people from each of the different clients |
02:49:44 | jgarzik: | phantomcircuit, where the DHT was the mainline DHT, and was enabled, and was compiled into the client... it worked. unfortunately all those conditions narrowed the client list dramatically |
02:49:50 | phantomcircuit: | generally speaking to get that kind of distribution requires a significant multiple of 20 |
02:50:01 | gmaxwell: | DHT regardless of the client seems to basically not work for small numbers of users.
02:50:13 | phantomcircuit: | also oh shit accidentally walked onto a security issue |
02:50:17 | phantomcircuit: | (not in bitcoin) |
02:50:19 | phantomcircuit: | sigh |
02:50:46 | phantomcircuit: | incompetence everywhere tooo the max! |
02:50:55 | gmaxwell: | phantomcircuit: I'm still waiting for the announcement that Bitcoin Consultancy is taking over operations for MTGox. |
02:51:12 | phantomcircuit: | gmaxwell, tomorrow! |
02:51:13 | phantomcircuit: | :/ |
02:51:23 | phantomcircuit: | that would be like... me at this point |
02:51:38 | antephialtic: | I think a DHT can work for small number of users if there is little churn. The problem is that in the real world, the probability that a random node joining the network will be long lived is fairly small |
02:52:09 | gmaxwell: | antephialtic: and no attackers. :( and no busted software and... |
02:52:20 | jgarzik: | phantomcircuit, you should re-hire Amir as spokesperson now that he has all this puffed notoriety from Darth Wallet ;p |
02:52:32 | jgarzik: | or if not spokesperson, then, something anyway. |
02:52:56 | kanzure: | phantomcircuit: what possible compensation would motivate you to bother |
02:54:02 | antephialtic: | gmaxwell: agreed, the multitude of failure scenarios are somewhat disheartening. Ian Stoica never mentioned that when he taught my class about Chord... |
02:54:23 | antephialtic: | 's/Ian/Ion' |
02:57:08 | gmaxwell: | DHT has suffered waves of amazing hype, but generally been an actual engineering failure because of poor performance when not constructed out of spherical cows. :) By itself this isn't so bad, but because of the use of it in torrent (even where it actually doesn't work much of the time when you'd need it) it's often invoked by people who can't even understand explanations about why it wouldn't apply or wouldn't work for whatever ...
02:57:14 | gmaxwell: | ... they're suggesting it for... so I like to gripe, but don't mind me. |
02:58:10 | andytoshi: | gmaxwell: i actually had a person on PM -today- asking why an alt couldn't just use a DHT to use consensus |
02:58:27 | justanotheruser: | Would it be an advantage to have 2 minute block times? It would be harder for someone to beat the mainchain from an hour behind if they have 40% with 2 minute blocks than 10 minute blocks. |
02:58:32 | phantomcircuit: | that is hilarious |
02:58:39 | andytoshi: | he also wanted a POW that could only be computed by humans o.O |
02:58:40 | phantomcircuit: | stupid support system isn't properly designed |
02:58:45 | phantomcircuit: | sending passwords in emails |
02:59:02 | phantomcircuit: | except anybody with admin access can elect to receive emails sent to anybody on the account |
02:59:09 | tt_away: | justanotheruser: kinda sorta, depends on the number of tx going through the network. |
02:59:09 | gmaxwell: | andytoshi: hm. I thought that had died down some what. Perhaps they started hiding from me after I started sending out the ninja assault teams. |
02:59:13 | phantomcircuit: | so i have emails with the support techs passwords |
02:59:20 | tt_away: | justanotheruser: at 1mb blocks it'd be less secure |
02:59:23 | kanzure: | phantomcircuit: are you doing a security audit of someone? |
02:59:31 | phantomcircuit: | no this was just 100% on accident |
02:59:40 | kanzure: | sounds like fun |
02:59:41 | jcorgan: | andytoshi: buddhacoin mines by meditation |
02:59:45 | gmaxwell: | I hate when that happens. |
02:59:52 | justanotheruser: | tt_away: yeah, because there would be a higher ratio of propagationTime:BlockTime right?
02:59:56 | gmaxwell: | TheGame coin, which you mine by not thinking about it. |
03:01:25 | justanotheruser: | andytoshi: Can I /q you? |
03:01:44 | andytoshi: | justanotheruser: sure, any time |
03:01:45 | tt_away: | justanotheruser: uh huh, see GHOST paper. at 192 kb per 2 min block maybe, but i'm not sure the advantage is huge. i suspect you might run into fee economics problems with smaller blocksizes too, as demand for inclusion in blocks in the 2 min timeframe means that people may start submitting higher and higher fees if the blocks are full, but i'm more waiting to see if that's true when blocks are 100% full on the bitcoin |
03:01:45 | tt_away: | network. |
03:01:52 | phantomcircuit: | justanotheruser, theoretically if you keep the bandwidth requirement the same but reduce the time between blocks you get more security, however you also get more overhead |
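[Editor's note: the tradeoff in this thread can be put in rough numbers with a standard back-of-envelope stale-rate model assuming Poisson block arrivals. The 10-second propagation figure is an assumed illustration, not a number from the log.]

```python
import math

def stale_rate(propagation_s: float, interval_s: float) -> float:
    """Rough stale/orphan rate: the chance another block is found
    somewhere during the propagation window, with Poisson arrivals."""
    return 1.0 - math.exp(-propagation_s / interval_s)

# Same 10 s propagation delay, two different target intervals:
ten_min = stale_rate(10, 600)   # ~1.7% of blocks go stale
two_min = stale_rate(10, 120)   # ~8.0% of blocks go stale
assert two_min > ten_min
```

Shorter intervals give finer-grained confirmation but waste a larger fraction of the network's work on stale blocks, which is the overhead phantomcircuit refers to.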
03:04:21 | gmaxwell: | tt_away: I'm skeptical about the GHOST paper in general, the additional blocks are not committed which means that the chain decisions are not necessarily consistent in the network. Seems very dangerous. I can describe a malicious mining situation where someone with moderate hashpower could potentially keep the network from converging for a very long time. (subject to implementation details that you'd have to get right which no one has ...
03:04:28 | gmaxwell: | ... described) |
03:05:01 | gmaxwell: | (e.g. keep feeding out orthogonal parts of the network orphaned children for separate forks .. driving the forks constantly towards a tie.
03:05:04 | gmaxwell: | ) |
03:06:07 | tt_away: | gmaxwell: I am too, but the one relevant bit I think was that at higher tx rates (full blocks) an attacker should be able to doublespend with <51% of the network, which i guess I hadn't considered before (but seems obvious now). |
03:06:08 | gmaxwell: | If people find lighter weight fast confirmations interesting you don't even need a hardfork, just softfork everyone onto a block content enforcing version of p2pool, and the p2pool sharechain then gives you your light confirmations. |
03:06:32 | tt_away: | * tt_away nods. |
03:06:43 | gmaxwell: | tt_away: oh well you can probabilistically double spend with any share of the network. ... and with probabilities that actually make the attacks attractive.
03:06:52 | gmaxwell: | (given current hashpower consolidations) |
03:06:56 | tt_away: | gmaxwell: right. |
03:07:16 | gmaxwell: | the nice thing about the sharechain approach— beyond compatibility— is that it doesn't increase overhead for non-miners.
03:07:30 | gmaxwell: | (or at least non-miners who don't care about the latest fast confirmations) |
03:07:55 | tt_away: | gmaxwell: is there documentation for the sharechain stuff? i'm not super familiar with p2pool. |
03:08:29 | phantomcircuit: | gmaxwell, i like the p2pool style fast confirmations approach |
03:08:30 | tt_away: | and actually i'm starting to feel really ignorant about it, since it seems like an important breakthrough in decentralized mining that i should have understood a while ago. |
03:08:31 | kanzure: | https://en.bitcoin.it/wiki/P2Pool |
03:08:36 | tt_away: | kanzure: thanks |
03:08:46 | kanzure: | yeah, all the other descriptions except on the wiki suck |
03:09:15 | gmaxwell: | tt_away: it's pretty 'obvious' once you get it. |
05:43:03 | justanotheruser: | justanotheruser is now known as just[dead] |
06:05:11 | just[dead]: | just[dead] is now known as justanotheruser |
08:17:23 | Muis_: | Muis_ is now known as Muis |
11:41:20 | c--O-O: | c--O-O is now known as MagicalLies |
13:33:07 | HobGoblin: | HobGoblin is now known as Guest21087 |
13:40:45 | aksyn_: | aksyn_ is now known as aksyn |
13:52:49 | wump: | wump is now known as wumpus |
14:39:27 | iddo_: | iddo_ is now known as iddo |
15:39:52 | rastapopuloto: | rastapopuloto has left #bitcoin-wizards |
15:48:13 | weex_: | weex_ is now known as weex |
17:20:41 | amiller_: | amiller_ is now known as amiller |
17:44:16 | mappum: | mappum is now known as 5EXAANQ7F |
17:50:49 | HobGoblin: | HobGoblin is now known as Guest94002 |
19:36:17 | qwertyoruiop_: | qwertyoruiop_ is now known as qwertyoruiop |
19:44:38 | diesel_: | diesel_ is now known as flotsamuel |
19:50:13 | amiller: | today i'm studying this paper http://www.cs.virginia.edu/~mohammad/files/papers/15%20TimeStamp.pdf |
19:50:54 | amiller: | we spent a while previously trying to figure out whether you could check a sequential chain of proofs-of-work efficiently
20:08:13 | petertodd: | amiller: it's interesting how this can be a co-operative scheme where you just take the chain of hashes with the most hashes/second |
20:49:58 | amiller: | still haven't quite got my head around this |
20:50:05 | amiller: | first of all it has some characteristics that are really unusual |
20:50:18 | amiller: | like the amount of storage needed by a prover is proportional to the total amount of work |
21:02:44 | tromp_: | andrew, where can i find a copy of your recent permacoin paper online? |
21:06:37 | tromp_: | i mean andrew->amiller |
21:06:59 | amiller: | i'll pm you, i haven't cleaned it up so i shouldn't be propagating it widely though |
21:11:22 | flotsamuel: | flotsamuel is now known as Guest67668 |
21:54:05 | HM: | HM is now known as nly |
21:58:50 | spin123456: | spin123456 is now known as spinza |
23:46:46 | amiller: | oh man i'm so thrilled with this petertodd |
23:46:48 | amiller: | this is absolutely a solution to that old problem we futzed around with |
23:46:50 | amiller: | if it can be built incrementally (i think it can) then we'd have a good way of statistically evaluating the work of a chain in a way that can't be off by very much |
23:46:53 | amiller: | i don't have a good intuition yet for the shape of the graph they use |
23:46:55 | amiller: | it uses "A-expanders" and relies on some 1975 proof by Erdos |
23:46:57 | amiller: | but the basic idea is, for any 0 < B < A < 1, there is a family of graphs of N vertices that have at most log(N) degree (so like skip lists / merkle mountain ranges)
23:47:00 | amiller: | and the property is that ANY subset of vertices of length A*N |
23:47:02 | amiller: | contains a connected path of length B*N |
23:47:22 | amiller: | because this applies to *any* subset |
23:47:34 | amiller: | you don't have to actually check the whole path A |
23:47:46 | amiller: | the point is you can check some random subset of points |
23:48:02 | amiller: | and you can do a simple union bound kind of random argument to say |
23:49:04 | amiller: | if the attacker can forge a proof-of-work chain with some probability N
23:49:12 | amiller: | er some probability p i dunno |
23:49:42 | amiller: | you could run that attacker some polynomial times like A*N*p times |
23:49:42 | amiller: | er let's say each challenge consists of k samples
23:50:06 | amiller: | k is pretty small |
23:50:12 | amiller: | then you could run him A*N/k times |
23:50:24 | tacotime: | tacotime is now known as tacotime_ |
23:50:24 | amiller: | and you would get about A*N/k *distinct* nodes |
23:50:38 | amiller: | and from that subset of nodes you would be able to find a connected path |
23:50:56 | nsh: | amiller, what's this now? link? |
23:51:24 | gmaxwell: | yea, that's why expander graphs can give you the polylog holographic proofs, same reason.
23:51:25 | amiller: | and that's enough to prove that you can't fool the verifier in this case without doing pretty much as close to the amount of work as claimed in the chain |
23:51:39 | amiller: | http://www.cs.virginia.edu/~mohammad/files/papers/15%20TimeStamp.pdf |
23:51:43 | nsh: | ty |
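[Editor's note: the random-sampling/union-bound idea amiller sketches can be illustrated on a plain hash chain. This is not the paper's expander construction; the caveat in the comments below is exactly the gap the expander property is there to close. All names and parameters are illustrative.]

```python
import hashlib, random

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """n inherently sequential hash steps; entry i+1 depends on entry i."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

def spot_check(chain, k, rng):
    """Verify k randomly chosen links instead of all of them."""
    n = len(chain) - 1
    return all(H(chain[i]) == chain[i + 1]
               for i in (rng.randrange(n) for _ in range(k)))

rng = random.Random(0)
chain = make_chain(b"genesis", 1000)
assert spot_check(chain, 32, rng)

# A cheater who skipped ~30% of the work leaves ~30% of the links broken,
# and 64 random probes miss all of them with probability <= 0.7**64 (~1e-10).
forged = list(chain)
for i in rng.sample(range(1, 1000), 300):
    forged[i] = b"\x00" * 32
assert not spot_check(forged, 64, rng)

# Caveat: a cheater who breaks only a FEW links (forking off fresh work
# after each break) evades sampling with high probability. That is why the
# paper needs graphs where ANY A*N-vertex subset contains a B*N connected
# path, so any sufficiently large sampled subset certifies sequential depth.
```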
23:51:56 | amiller: | so we had previously looked at doing this with an ordinary merkle tree |
23:52:02 | amiller: | that's that fabien coelho almost-constant verification thing
23:52:09 | amiller: | the problem is that the leaves are all computed in parallel |
23:52:19 | amiller: | the sequential depth of the tree is only log N
23:52:29 | amiller: | we also talked about having a tree built on top of a chain |
23:52:37 | amiller: | so first you compute all the leaves sequentially |
23:52:44 | amiller: | then build a merkle tree over them |
23:53:02 | amiller: | but the problem is most subsets of some size |
23:53:10 | amiller: | don't contain an unbroken path of very much depth
23:53:23 | amiller: | unless they contain all the leaves, basically |
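[Editor's note: a minimal sketch of the "compute the leaves sequentially, then build a Merkle tree on top" construction discussed here, assuming SHA-256 throughout; all names are illustrative. As amiller notes, opening a few random leaves under the root certifies only the log N tree depth plus the sampled leaves themselves, not the N sequential steps, unless essentially all leaves are opened.]

```python
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def sequential_leaves(seed: bytes, n: int):
    """Leaf i depends on leaf i-1, so producing all n leaves takes
    n inherently sequential hash steps."""
    leaves, cur = [], seed
    for i in range(n):
        cur = H(cur, i.to_bytes(4, "big"))
        leaves.append(cur)
    return leaves

def merkle_root(nodes):
    """Commit to the leaves; the tree above them is only log2(n) deep."""
    nodes = list(nodes)
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])  # duplicate last node on odd levels
        nodes = [H(a, b) for a, b in zip(nodes[0::2], nodes[1::2])]
    return nodes[0]

leaves = sequential_leaves(b"seed", 16)
root = merkle_root(leaves)
assert len(root) == 32
```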
23:53:51 | amiller: | i have no idea how to think of what an expander graph is, i should try implementing this i guess maybe it's not hard |
23:54:39 | amiller: | it would be nice if it turns out to be "like a binary tree" or "like a skip list" because i have a good grasp of those, hopefully it's not "like a spider web into another dimension"... |
23:54:41 | gmaxwell: | I don't think you can build it incrementally and have the expander property hold for subsets of the future size. |
23:54:47 | amiller: | hm. |
23:55:54 | gmaxwell: | MAYBE you can achieve some kind of quantization. |
23:56:22 | gmaxwell: | like a 128 node expander graph can be a proper subset of a 512 node expander graph.. but I think it cannot then be a subset of a 1024 node expander graph. |
23:59:44 | tacotime_: | amiller: Can you ELI5 how this paper fits into PT's sharding scheme? Are you using these time-lock puzzles to maintain temporal consistency for new branches mined on old blocks? |