00:06:40mike4:mike4 is now known as c--O-O
01:11:16EasyAt_:EasyAt_ is now known as EasyAt
01:48:49licnep__:licnep__ is now known as licep
01:48:53licep:licep is now known as licnep
01:52:30gmaxwell:gmaxwell is now known as gmaxwell_
01:53:15gmaxwell_:gmaxwell_ is now known as gmaxwell
01:56:17phantomcircuit:* phantomcircuit waves @ gmaxwell
02:12:31kanzure:hello
02:13:08kanzure:these guys are trying to slap in a credit system on top of "proof of storage",
02:13:11kanzure:https://groups.google.com/forum/#!topic/maidsafe-development/W-n-IQ_TUis
02:13:16gmaxwell:hi. so yea, I dunno how they can provide what they're saying there.
02:13:35kanzure:is there a proof that "proof of storage" is unachievable? i am sure some academic monkeys might have considered this.
02:14:12kanzure:i suppose really the problem is the introduction of a credit rather than the concept of using hashes for proofs
02:14:21gmaxwell:As I pointed out in #mtgox-chat you can compactly prove that you know some data, but ... right.
02:15:16amiller_:hi kanzure! i remember you from the jstor browser downloader thing :)
02:15:24kanzure:i enjoyed reading https://bitcointalk.org/index.php?topic=310323.0
02:15:39kanzure:yeah i was just about to start complaining about your https://gist.github.com/amiller/7131876
02:15:53kanzure:gmaxwell: oh yeah, i didn't realize you were the same gmaxwell that did the jstor torrent dump
02:16:15gmaxwell:well let's see.. hash tree your data ... encode it with a long error-correcting code (e.g. for the amount of redundancy you want network-wide), now encrypt it.
02:16:18kanzure:gmaxwell: https://github.com/kanzure/pdfparanoia (actually, jstor implemented a workaround recently, so i need to go reverse engineer it...)
02:16:59gmaxwell:kanzure: for that dump they were super easy to extract, e.g. you could just pdf to images and then discard the color layers. :)
02:17:25gmaxwell:Now no two nodes should have the same data. So you might be able to graft a compensation scheme onto that, at least so long as it's the original storing party paying.
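
A minimal sketch of the shape of this construction, assuming the erasure code is stubbed out as plain replication (a real deployment would use something like Reed-Solomon) and with per_node_key an invented derivation; this is illustrative only, not any project's actual scheme:

```python
import hashlib

def merkle_root(chunks):
    """Hash-tree the data: reduce leaf hashes pairwise to a single root."""
    layer = [hashlib.sha256(c).digest() for c in chunks]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate last node on odd layers
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

def encode_with_redundancy(chunks, factor=3):
    """Stand-in for a long error-correcting code: naive replication here."""
    return chunks * factor

def per_node_key(node_id, root):
    """Hypothetical per-node key derivation (invented for this sketch)."""
    return hashlib.sha256(node_id + root).digest()

def encrypt_share(share, key):
    """Toy SHA256-keystream cipher, for illustration only."""
    stream = b''
    counter = 0
    while len(stream) < len(share):
        stream += hashlib.sha256(key + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(share, stream))

chunks = [b'chunk-%d' % i for i in range(8)]
root = merkle_root(chunks)
shares = encode_with_redundancy(chunks)
# each storing node gets its share encrypted under its own key, so no two
# nodes hold identical bytes even when storing the same underlying chunk
print(encrypt_share(shares[0], per_node_key(b'node-42', root)).hex())
```
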
02:17:28kanzure:my approach is based on flate decoding and their position on the page
02:18:00kanzure:(i remove the actual pdfobject from the stream or w/e; in fact, i somewhat break the pdf file format by not mending the xref table)
02:18:17gmaxwell:but you could still have all the users dump all the data onto a single real machine, so your redundancy might be fictitious... but they wouldn't save any storage due to that.
02:18:32gmaxwell:downside in that case is that the network can't heal without your help.
02:18:46kanzure:correct, but if you reward credit based on what the nodes report, then those colluding centralized nodes earn a greater amount of credit i think
02:19:16kanzure:they tried to make a handwavy comparison to mining :) it is cute: http://blog.maidsafe.net/2014/02/18/token-on-maidsafe-network/
02:19:29gmaxwell:yea, if it's something other than the storing party paying then you're just hosed, because I can just make my data the digits of SHA256(secret||hash) so anyone can statelessly simulate storing it.
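
A sketch of the stateless-simulation attack gmaxwell describes: if the paying party is not the one who cares about the data, a "storage" node can pick its data to be SHA256(secret || index) and regenerate any chunk on demand. Names here are invented for illustration:

```python
import hashlib

SECRET = b'my-cheap-secret'

def fake_chunk(index):
    """Regenerate 'stored' chunk i on demand instead of storing anything."""
    return hashlib.sha256(SECRET + index.to_bytes(8, 'big')).digest()

def answer_challenge(index):
    # any retrieval or proof-of-storage challenge over this "dataset" can
    # be answered with O(1) storage, so the payment proves nothing about
    # actual storage consumed
    return fake_chunk(index)

print(answer_challenge(12345).hex())
```
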
02:20:33gmaxwell:wow there is a lot of completely confused gibberish on that page.
02:20:41kanzure:you know, i didn't even notice the watermarks were removed in your dump
02:20:53andytoshi:gmaxwell: can you decipher what maidsafe actually is yet?
02:20:54kanzure:torrents don't work as a mechanism for distributing science content. libgen has 7000 torrents that nobody seeds.
02:21:24gmaxwell:kanzure: They weren't removed, actually I published the watermarked copies, to make it clear that they were from JSTOR.
02:21:29kanzure:andytoshi: it just seems to be wrappers around generic storage concepts
02:21:31gmaxwell:(but I had previously removed them)
02:21:35kanzure:gmaxwell: oh i mean, watermarks like IP addresses?
02:21:45kanzure:andytoshi: https://github.com/maidsafe/MaidSafe/blob/master/tools/drive_test.py
02:21:50kanzure:andytoshi: https://github.com/maidsafe/MaidSafe/blob/master/tools/vault.py
02:21:59gmaxwell:kanzure: oh no, those weren't there. I left the logotype in. :)
02:22:07kanzure:i see
02:22:13gmaxwell:(they weren't originally there either, because reasons.)
02:22:26kanzure:i recently met someone who was ex-jstor
02:22:38kanzure:he was happy to spill the beans about terrible internal architecture stuff :)
02:22:49gmaxwell:kanzure: torrent is terrible, but it's an okay bulk method to get data to other people who will make it available via better mechanisms.
02:23:07kanzure:right- specifically i mean the social adoption around torrents, not the protocol itself
02:23:10kanzure:nobody seeds etc etc
02:23:19gmaxwell:yea.
02:23:34gmaxwell:the torrent webseed stuff was basically junk last I checked too.
02:24:13kanzure:it's a hard problem to solve. let's just pretend a certain someone has multiple terabytes of extremely high-quality pdfs. now what.. there's basically no humanitarian org that will touch it (argh).
02:24:23kanzure:internet archive is generally uninterested
02:24:25tt_away:ethereum was proposing data storage contracts based around doing verification of merkle trees containing the data, i think they're in the ethereum whitepaper.
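
For context, a hedged sketch of the generic Merkle-proof check such a storage contract would rest on (this is the standard technique, not Ethereum's actual contract code): the contract keeps only the root, challenges a random leaf index, and the storer responds with the leaf plus its authentication path.

```python
import hashlib

def h(x):
    return hashlib.sha256(x).digest()

def verify_merkle_proof(root, leaf, index, path):
    """Recompute the root from a challenged leaf and its sibling path."""
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# tiny usage example: a 4-leaf tree, proving leaf index 2
leaves = [b'a', b'b', b'c', b'd']
l = [h(x) for x in leaves]
n01, n23 = h(l[0] + l[1]), h(l[2] + l[3])
root = h(n01 + n23)
print(verify_merkle_proof(root, b'c', 2, [l[3], n01]))  # True
```
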
02:27:53jcorgan:gmaxwell: what was it you didn't like about the concept of a torrent web seed?
02:31:15gmaxwell:jcorgan: it's a great concept, but at least when I last tried they basically didn't work— required server considerations and then some/many/most torrent clients wouldn't actually use them. Maybe it's improved.
02:34:59jcorgan:we distribute the GNU Radio dvd ISO files via torrent; it has a web seed off our server, seems to work well enough (no complaints), and usually we only have 4-5 seeders or less.
02:35:46jcorgan:i was just curious about your comment as this is the only experience i have with web seeded torrents, not a big torrent user myself
02:36:56gmaxwell:may be working better now, then; my experience is dated by a few years.
02:39:11jgarzik:Web seed is just a slack way of getting around not running a torrent daemon. But it seems like, if you have a 24/7 web server, you can likely figure out how to set up a 24/7 torrent seed.
02:39:32jgarzik:Web seeding is definitely broken or odd in several clients
02:40:03gmaxwell:jgarzik: if it worked super well it would allow you to use a regular true HTTP only CDN, where you couldn't necessarily run a full time seed.
02:40:07kanzure:if you are running a web server then users will probably pester you to just dump the data over HTTP
02:40:21jgarzik:gmaxwell, no argument
02:40:37jgarzik:but it never worked well, so nobody really uses it or demands good support, so it never works well
02:45:58just[dead]:just[dead] is now known as justanotheruser
02:47:07phantomcircuit:jgarzik, iirc the only client that ever supported it correctly was... something that started with an s
02:47:14phantomcircuit:i cant even remember its been so long
02:47:37phantomcircuit:shareaza
02:47:55jgarzik:the DHT experience was disappointing, too
02:48:23maaku_:* maaku_ would participate in a #bittorrent-wizards channel
02:48:24jgarzik:major clients like Vuze have their own torrent DHTs, and thus, their own islands separate from the rest of the world
02:48:33phantomcircuit:jgarzik, it takes thousands of peers before dht works well
02:48:53jgarzik:phantomcircuit, that was not my experience
02:49:10phantomcircuit:jgarzik, DHT + peer exchange can merge the islands
02:49:24phantomcircuit:but you need ~20 people from each of the different clients
02:49:44jgarzik:phantomcircuit, where the DHT was the mainline DHT, and was enabled, and was compiled into the client... it worked. unfortunately all those conditions narrowed the client list dramatically
02:49:50phantomcircuit:generally speaking to get that kind of distribution requires a significant multiple of 20
02:50:01gmaxwell:DHT regardless of the client seems to basically not work for small numbers of users.
02:50:13phantomcircuit:also oh shit accidentally walked onto a security issue
02:50:17phantomcircuit:(not in bitcoin)
02:50:19phantomcircuit:sigh
02:50:46phantomcircuit:incompetence everywhere tooo the max!
02:50:55gmaxwell:phantomcircuit: I'm still waiting for the announcement that Bitcoin Consultancy is taking over operations for MTGox.
02:51:12phantomcircuit:gmaxwell, tomorrow!
02:51:13phantomcircuit::/
02:51:23phantomcircuit:that would be like... me at this point
02:51:38antephialtic:I think a DHT can work for a small number of users if there is little churn. The problem is that in the real world, the probability that a random node joining the network will be long-lived is fairly small
02:52:09gmaxwell:antephialtic: and no attackers. :( and no busted software and...
02:52:20jgarzik:phantomcircuit, you should re-hire Amir as spokesperson now that he has all this puffed notoriety from Darth Wallet ;p
02:52:32jgarzik:or if not spokesperson, then, something anyway.
02:52:56kanzure:phantomcircuit: what possible compensation would motivate you to bother
02:54:02antephialtic:gmaxwell: agreed, the multitude of failure scenarios is somewhat disheartening. Ion Stoica never mentioned that when he taught my class about Chord...
02:57:08gmaxwell:DHT has suffered waves of amazing hype, but generally been an actual engineering failure because of poor performance when not constructed out of spherical cows. :) By itself this isn't so bad, but because of its use in torrent (even where it actually doesn't work much of the time when you'd need it) it's often invoked by people who can't even understand explanations about why it wouldn't apply or wouldn't work for whatever they're suggesting it for... so I like to gripe, but don't mind me.
02:58:10andytoshi:gmaxwell: i actually had a person on PM -today- asking why an alt couldn't just use a DHT to use consensus
02:58:27justanotheruser:Would it be an advantage to have 2 minute block times? It would be harder for someone to beat the mainchain from an hour behind if they have 40% with 2 minute blocks than with 10 minute blocks.
02:58:32phantomcircuit:that is hilarious
02:58:39andytoshi:he also wanted a POW that could only be computed by humans o.O
02:58:40phantomcircuit:stupid support system isn't properly designed
02:58:45phantomcircuit:sending passwords in emails
02:59:02phantomcircuit:except anybody with admin access can elect to receive emails sent to anybody on the account
02:59:09tt_away:justanotheruser: kinda sorta, depends on the number of tx going through the network.
02:59:09gmaxwell:andytoshi: hm. I thought that had died down some what. Perhaps they started hiding from me after I started sending out the ninja assault teams.
02:59:13phantomcircuit:so i have emails with the support techs passwords
02:59:20tt_away:justanotheruser: at 1mb blocks it'd be less secure
02:59:23kanzure:phantomcircuit: are you doing a security audit of someone?
02:59:31phantomcircuit:no this was just 100% on accident
02:59:40kanzure:sounds like fun
02:59:41jcorgan:andytoshi: buddhacoin mines by meditation
02:59:45gmaxwell:I hate when that happens.
02:59:52justanotheruser:tt_away: yeah, because there would be a higher ratio of propagationTime:BlockTime right?
02:59:56gmaxwell:TheGame coin, which you mine by not thinking about it.
03:01:25justanotheruser:andytoshi: Can I /q you?
03:01:44andytoshi:justanotheruser: sure, any time
03:01:45tt_away:justanotheruser: uh huh, see the GHOST paper. at 192 kb per 2 min block maybe, but i'm not sure the advantage is huge. i suspect you might run into fee economics problems with smaller blocksizes too, as demand for inclusion in blocks in the 2 min timeframe means that people may start submitting higher and higher fees if the blocks are full, but i'm more waiting to see if that's true when blocks are 100% full on the bitcoin network.
03:01:52phantomcircuit:justanotheruser, theoretically if you keep the bandwidth requirement the same but reduce the time between blocks you get more security, however you also get more overhead
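
A back-of-the-envelope model of the tradeoff under discussion, using the common approximation that a block risks going stale if another block is found during its propagation window; the 10-second delay figure is illustrative:

```python
import math

def stale_rate(propagation_delay_s, block_interval_s):
    """Probability another block arrives within the propagation window,
    assuming Poisson block arrivals: 1 - exp(-d/T)."""
    return 1 - math.exp(-propagation_delay_s / block_interval_s)

for interval in (600, 120):          # 10 min vs 2 min blocks
    print(interval, round(stale_rate(10, interval), 4))
# 600 -> ~0.0165, 120 -> ~0.08: faster blocks cut confirmation latency
# but raise the stale rate (overhead) for the same propagation delay
```
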
03:04:21gmaxwell:tt_away: I'm skeptical about the GHOST paper in general, the additional blocks are not committed which means that the chain decisions are not necessarily consistent in the network. Seems very dangerous. I can describe a malicious mining situation where someone with moderate hashpower could potentially keep the network from converging for a very long time. (subject to implementation details that you'd have to get right, which no one has described)
03:05:01gmaxwell:(e.g. keep feeding orthogonal parts of the network orphaned children for separate forks .. driving the forks constantly towards a tie.)
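
A toy simulation of the balance attack gmaxwell sketches; the fork-split model, parameters, and convergence rule are all invented for illustration, not taken from any analysis of GHOST itself:

```python
import random
import statistics

def rounds_until_convergence(q, lead_to_win=3):
    """Honest miners split evenly across forks A and B; an attacker with
    hashrate fraction q mines privately and releases withheld blocks to
    whichever fork falls behind, prolonging the tie."""
    a = b = reserve = 0
    for t in range(1, 1_000_000):
        r = random.random()
        if r < q:
            reserve += 1                  # attacker mines privately
        elif r < q + (1 - q) / 2:
            a += 1                        # honest block on fork A
        else:
            b += 1                        # honest block on fork B
        while reserve and abs(a - b) >= 1:
            reserve -= 1                  # top up the lagging fork
            if a < b: a += 1
            else:     b += 1
        if abs(a - b) >= lead_to_win:     # network converges on a winner
            return t
    return 1_000_000

random.seed(7)
for q in (0.0, 0.3, 0.45):
    trials = [rounds_until_convergence(q) for _ in range(100)]
    print(q, statistics.mean(trials))
# mean time-to-convergence grows with q: the withheld blocks keep
# driving the two forks back toward a tie
```
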
03:06:07tt_away:gmaxwell: I am too, but the one relevant bit I think was that at higher tx rates (full blocks) an attacker should be able to doublespend with <51% of the network, which i guess I hadn't considered before (but seems obvious now).
03:06:08gmaxwell:If people find lighter-weight fast confirmations interesting you don't even need a hardfork, just softfork everyone onto a block-content-enforcing version of p2pool, and the p2pool sharechain then gives you your light confirmations.
03:06:32tt_away:* tt_away nods.
03:06:43gmaxwell:tt_away: oh well you can probabilistically double spend with any share of the network. ... and with probabilities that actually make the attacks attractive.
03:06:52gmaxwell:(given current hashpower consolidations)
03:06:56tt_away:gmaxwell: right.
03:07:16gmaxwell:the nice thing about the sharechain approach— beyond compatibility— is that it doesn't increase overhead for non-miners.
03:07:30gmaxwell:(or at least non-miners who don't care about the latest fast confirmations)
03:07:55tt_away:gmaxwell: is there documentation for the sharechain stuff? i'm not super familiar with p2pool.
03:08:29phantomcircuit:gmaxwell, i like the p2pool style fast confirmations approach
03:08:30tt_away:and actually i'm starting to feel really ignorant about it, since it seems like an important breakthrough in decentralized mining that i should have understood a while ago.
03:08:31kanzure:https://en.bitcoin.it/wiki/P2Pool
03:08:36tt_away:kanzure: thanks
03:08:46kanzure:yeah, all the other descriptions except on the wiki suck
03:09:15gmaxwell:tt_away: it's pretty 'obvious' once you get it.
05:43:03justanotheruser:justanotheruser is now known as just[dead]
06:05:11just[dead]:just[dead] is now known as justanotheruser
08:17:23Muis_:Muis_ is now known as Muis
11:41:20c--O-O:c--O-O is now known as MagicalLies
13:33:07HobGoblin:HobGoblin is now known as Guest21087
13:40:45aksyn_:aksyn_ is now known as aksyn
13:52:49wump:wump is now known as wumpus
14:39:27iddo_:iddo_ is now known as iddo
15:39:52rastapopuloto:rastapopuloto has left #bitcoin-wizards
15:48:13weex_:weex_ is now known as weex
17:20:41amiller_:amiller_ is now known as amiller
17:44:16mappum:mappum is now known as 5EXAANQ7F
17:50:49HobGoblin:HobGoblin is now known as Guest94002
19:36:17qwertyoruiop_:qwertyoruiop_ is now known as qwertyoruiop
19:44:38diesel_:diesel_ is now known as flotsamuel
19:50:13amiller:today i'm studying this paper http://www.cs.virginia.edu/~mohammad/files/papers/15%20TimeStamp.pdf
19:50:54amiller:we spent a while previously trying to figure out whether you could check a sequential chain of proofs-of-work efficiently
20:08:13petertodd:amiller: it's interesting how this can be a co-operative scheme where you just take the chain of hashes with the most hashes/second
20:49:58amiller:still haven't quite got my head around this
20:50:05amiller:first of all it has some characteristics that are really unusual
20:50:18amiller:like the amount of storage needed by a prover is proportional to the total amount of work
21:02:44tromp_:andrew, where can i find a copy of your recent permacoin paper online?
21:06:37tromp_:i mean andrew->amiller
21:06:59amiller:i'll pm you, i haven't cleaned it up so i shouldn't be propagating it widely though
21:11:22flotsamuel:flotsamuel is now known as Guest67668
21:54:05HM:HM is now known as nly
21:58:50spin123456:spin123456 is now known as spinza
23:46:46amiller:oh man i'm so thrilled with this petertodd
23:46:48amiller:this is absolutely a solution to that old problem we futzed around with
23:46:50amiller:if it can be built incrementally (i think it can) then we'd have a good way of statistically evaluating the work of a chain in a way that can't be off by very much
23:46:53amiller:i don't have a good intuition yet for the shape of the graph they use
23:46:55amiller:it uses "A-expanders" and relies on a 1975 proof by Erdős
23:46:57amiller:but the basic idea is, for any 0 < B < A < 1, there is a family of graphs of N vertices that have at most log(N) degree (so like skip lists / merkle mountain ranges)
23:47:00amiller:and the property is that ANY subset of A*N vertices
23:47:02amiller:contains a connected path of length B*N
23:47:22amiller:because this applies to *any* subset
23:47:34amiller:you don't have to actually check the whole path A
23:47:46amiller:the point is you can check some random subset of points
23:48:02amiller:and you can do a simple union bound kind of random argument to say
23:49:04amiller:if the attacker can forge a proof-of-work chain with some probability p
23:49:42amiller:you could run that attacker a polynomial number of times, like A*N*p times
23:49:54amiller:er let's say each challenge consists of k samples
23:50:06amiller:k is pretty small
23:50:12amiller:then you could run him A*N/k times
23:50:24tacotime:tacotime is now known as tacotime_
23:50:24amiller:and you would get about A*N/k *distinct* nodes
23:50:38amiller:and from that subset of nodes you would be able to find a connected path
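
A simplified sketch of the spot-check idea, done here over a plain hash chain for readability; the paper's actual guarantee comes from sampling over an expander-graph commitment, and this naive version only bounds the *fraction* of bad links rather than sequential depth:

```python
import hashlib
import random

def h(x):
    return hashlib.sha256(x).digest()

def build_chain(n, seed=b'genesis'):
    chain = [h(seed)]
    for _ in range(n - 1):
        chain.append(h(chain[-1]))   # sequential work: each step needs the last
    return chain

def spot_check(chain, k):
    """Sample k random links; a bad link is caught with prob ~ (bad fraction)."""
    for _ in range(k):
        i = random.randrange(1, len(chain))
        if chain[i] != h(chain[i - 1]):
            return False
    return True

print(spot_check(build_chain(1000), k=50))
# a cheater who fakes a fraction f of links escapes k samples with
# probability (1 - f)^k, e.g. f=0.1, k=50 gives ~0.5% escape odds:
print((1 - 0.1) ** 50)
```
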
23:50:56nsh:amiller, what's this now? link?
23:51:24gmaxwell:yea, that's why expander graphs can give you the polylog holographic proofs, same reason.
23:51:25amiller:and that's enough to prove that you can't fool the verifier in this case without doing pretty much as close to the amount of work as claimed in the chain
23:51:39amiller:http://www.cs.virginia.edu/~mohammad/files/papers/15%20TimeStamp.pdf
23:51:43nsh:ty
23:51:56amiller:so we had previously looked at doing this with an ordinary merkle tree
23:52:02amiller:that's that Fabien Coelho almost-constant-verification thing
23:52:09amiller:the problem is that the leaves are all computed in parallel
23:52:19amiller:the sequential depth of the tree is only log N
23:52:29amiller:we also talked about having a tree built on top of a chain
23:52:37amiller:so first you compute all the leaves sequentially
23:52:44amiller:then build a merkle tree over them
23:53:02amiller:but the problem is most subsets of some size
23:53:10amiller:don't contain an unbroken path of very much depth
23:53:23amiller:unless they contain all the leaves, basically
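
A quick numerical illustration of that point: in a plain chain, a random subset of A*N leaves almost never contains a long consecutive run, so sampled leaves cannot certify much sequential depth (unlike the expander construction, where any A*N-vertex subset contains a path of length B*N):

```python
import random

def longest_run(n=10000, a=0.5, seed=0):
    """Longest run of consecutive indices in a random subset of a*n leaves."""
    random.seed(seed)
    picked = sorted(random.sample(range(n), int(a * n)))
    best = cur = 1
    for prev, nxt in zip(picked, picked[1:]):
        cur = cur + 1 if nxt == prev + 1 else 1
        best = max(best, cur)
    return best

print(longest_run())
# typically on the order of log(n), roughly a dozen here, nowhere near
# the ~5000 consecutive leaves you'd need to certify real sequential depth
```
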
23:53:51amiller:i have no idea how to think of what an expander graph is, i should try implementing this i guess maybe it's not hard
23:54:39amiller:it would be nice if it turns out to be "like a binary tree" or "like a skip list" because i have a good grasp of those, hopefully it's not "like a spider web into another dimension"...
23:54:41gmaxwell:I don't think you can build it incrementally and have the expander property hold for subsets of the future size.
23:54:47amiller:hm.
23:55:54gmaxwell:MAYBE you can achieve some kind of quantization.
23:56:22gmaxwell:like a 128 node expander graph can be a proper subset of a 512 node expander graph.. but I think it cannot then be a subset of a 1024 node expander graph.
23:59:44tacotime_:amiller: Can you ELI5 how this paper fits into PT's sharding scheme? Are you using these time-lock puzzles to maintain temporal consistency for new branches mined on old blocks?