--- Log opened Fri Jan 17 00:00:09 2014
01:10 < maaku_> petertodd: that wouldn't be a bad outcome
01:11 * maaku_ dreams of commodity supercomputers
01:57 < CodeShark> opinions? https://github.com/CodeShark/bitcoin/compare/coinparams_new
02:21 < wumpus> CodeShark: I'm ok with moving more chain-specific configuration (such as MoneyRange) to chainparams, but adding all those redundant hashing algorithms isn't going to make it into mainline imo
02:21 < CodeShark> right, I realize that - I was considering a plugin model
02:22 < CodeShark> scrypt.so, hash9.so, etc...
02:23 < wumpus> hmm I don't know
02:23 < CodeShark> or perhaps a compile-time flag to statically link a particular hash function
02:23 < wumpus> I'm all for making the source more modular, and making it into libraries, but loadable libraries bring a lot of problems of their own
02:24 < CodeShark> what are your concerns?
02:25 < wumpus> security mainly, incompatibility, general so/dll hell
02:25 < wumpus> for now I'd more like a modular approach based on libraries (which can get statically linked into the end product)
02:25 < CodeShark> so then perhaps a way to specify a list of static modules to link at compile time
02:26 < wumpus> or make it possible to install bitcoin core as a library, so that actual implementations/daemons can compile and link against it
02:27 < wumpus> or other applications that may need the bitcoin consensus stuff for their own purposes
02:28 < wumpus> anyway, lots of options, but: no altcoin-specific stuff in bitcoin/bitcoin please
02:28 < CodeShark> for other applications I'm thinking more of a service-oriented architecture, with a core engine providing runtime services to other processes
02:29 < CodeShark> yeah, the intention wasn't to merge the altcoin-specific stuff into bitcoin/bitcoin
02:29 < CodeShark> just to expose the ability to customize the core engine
02:29 < wumpus> okay
02:30 < CodeShark> the inclusion of scrypt and hash9 in particular is a total hack at this point, just intended to test the basic idea
02:34 < CodeShark> I'm also thinking that rather than trying to parametrize things like block reward and retargeting rules it would be better to also use a statically linked module approach
02:41 < wumpus> let's move this to #bitcoin-dev
08:20 < adam3us> amiller: when you're awake, about fractional blocks: I am wondering if there is an incentive issue. if a 0.1 block collects 0.1 of fees and is easily orphanable by a powerful miner, what motive do they have to not selfishly orphan it to collect the other 10% of the fee?
09:39 < _ingsoc> andytoshi: Where are the -wizards logs again?
09:40 < andytoshi> _ingsoc: http://download.wpsoftware.net/bitcoin/wizards/
09:41 < _ingsoc> Ty.
09:41 < michagogo|cloud> (That really belongs in the topic...
09:42 < andytoshi> no worries, i'm afraid you'll have a lot to scroll through, the last three days have been obscenely busy on this channel
09:42 < michagogo|cloud> )
13:36 < gwern> I believe we were discussing ethereum before? might be of interest: https://bitslog.wordpress.com/2014/01/17/ethereum-dagger-pow-is-flawed/ http://www.reddit.com/r/ethereum/comments/1vgqa7/ethereum_dagger_pow_function_is_flawed/
13:36 < Ursium> hi gwern, yes i saw that
13:37 < Ursium> i believe the founders are aware, as i remember reading about this very issue a while back.
13:39 < petertodd> Ursium: that's not a very good analysis: sequential memory hardness isn't all it's cracked up to be for real-world hardware designs
13:40 < Ursium> petertodd: i see!
13:40 < petertodd> Ursium: not to say his point is necessarily invalid, but what needs to be done is to get an *actual* hardware engineer on board rather than just a bunch of software people theorizing about what makes something asic-hard
13:41 < Ursium> makes sense. Will be interesting to follow for sure
13:42 < sipa> (upcoming ad hominem) the author suggesting x86 as script code doesn't inspire much confidence
13:42 < petertodd> sipa: +1
13:43 < maaku> i think there's a valid technical point in that ad hominem
13:43 < Ursium> sipa: i believe they suggest C-like scripting which converts to a very limited set of opcodes - so only interactions with the blockchain etc. What do you guys think?
13:43 < sipa> maaku: yes, but it's irrelevant to the issue being discussed
13:44 < maaku> Ursium: see the logs for the past few days. we've had some interesting discussions about what you can do with a more powerful script
13:44 < maaku> mostly related to covenants
13:44 < petertodd> Ursium: the idea of extrospective scripts is a good one; how to implement them is another issue
13:44 < maaku> you would *not* want to do so using an ad-hoc CISC language, however
13:45 < petertodd> maaku: speaking of: you realize that for colored coins and many other covenants, you actually only need to look *backwards*, so they aren't really covenants and have no issues
13:45 < maaku> you'd need something amenable to static analysis (e.g. a strongly typed stack language)
13:45 < petertodd> maaku: or a single type :P
13:46 < maaku> petertodd: ? for CC you need to look at the outputs of the current transaction to avoid inflation
13:46 < maaku> well, functions/combinators are types...
13:47 < maaku> michagogo|cloud: I'm allowed to op, but not change the title for some reason. Is that a different permission?
13:47 < petertodd> maaku: NO! the magic CC script can be written such that it itself checks the transaction recursively, which means that all you have to do is check that the CC script would see the current "tip" transaction as valid one step back
13:47 < maaku> oh yes i see how that would work
13:48 < maaku> you lose SPV compatibility though
13:48 < petertodd> maaku: no you don't! SPV compat is still there because you only need to check one step back to know the whole chain is valid
13:48 < petertodd> maaku: remember, the magic CC script can only exist in the scriptSig if the previous tx included the magic CC script in the scriptSig, all the way back to the genesis condition
13:48 < gwern> huh. induction in real life.
13:48 < petertodd> gwern: yes
13:49 < maaku> only if the coins become unspendable if they were invalidly constructed
13:50 < petertodd> maaku: no, the coins are always spendable, but you can't spend them with a transaction scriptSig that matches the CC script checker
13:50 < petertodd> maaku: IE, you can get the coins back under all circumstances, you just can't make them colored fraudulently
13:51 < maaku> petertodd: ok, walk me through this. I create a transaction marking all the outputs as 'blue' with a CC script prefix
13:51 < gmaxwell> you make the script only allow you to assign it if it was used previously or if some birth criteria is met.
13:52 < petertodd> maaku: *no*, you make a transaction that in the scriptSig includes the CC validity script
13:52 < gmaxwell> E.g. this colored coin script can be applied if the parent txout is txid:vout or if the parent script had this script on it.
13:52 < petertodd> maaku: now, if you are the genesis tx, you have a separate code path that checks a signature or that the txin is something specific or whatever
13:52 < gmaxwell> you use the first rule to give birth to the colored coin and then the rest can only be children of it.
13:52 < petertodd> gmaxwell: can only be children of it *or* not colored
13:52 < gmaxwell> or not colored, indeed.
13:53 < gmaxwell> don't want it to be viral.
13:54 < petertodd> yup
13:54 < maaku> ok how do you restrict which outputs are colored?
13:54 < adam3us> gwern, petertodd: i am not sure how important sequential memory-hard is; Lerner says "must not allow easy parallelization"
13:55 < petertodd> maaku: the restriction is that you can't make the CC script execute unless the transaction only creates valid CC outputs
13:55 < adam3us> gwern, petertodd: however mining is inherently massively parallelizable by intent and necessity; what does it matter if it's micro-parallelizable as well as macro-parallelizable
13:55 < petertodd> maaku: but you can still spend the outputs, it just means the outputs aren't colored
13:56 < petertodd> adam3us: the difference is that micro-parallelization has different characteristics due to how memory works; the idea is that if you use some block of ram sequentially *at the scale of PoW mining hardware* that forces you to implement it in ways that look like commodity hardware
13:57 < maaku> petertodd: so the outputs are still tagged?
13:57 < petertodd> adam3us: the problem is this isn't a hard-and-fast rule - it's not that his argument is invalid, just that he needs to analyze it much more carefully than that
13:58 < petertodd> maaku: well, one way to do it would be to rely on the txout index, so you might commit to what values spends of the various txouts are allowed to have in some merkle tree without actually evaluating the txout scriptPubKeys directly
13:58 < adam3us> petertodd: well if it was really sequentially accessed it's cacheable and the address calc pipelineable. something more like scrypt's romix component with random access seems more plausible.
13:59 < maaku> petertodd: btw, have you considered looking at how modern commodity hardware is deficient, and focusing on proof of work that would improve the situation if commoditized?
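[editor's note: petertodd's colored-coin induction argument above can be sketched in a few lines. This is a hypothetical illustration in Python, not Bitcoin Script; the names (GENESIS, is_colored_spend) and the boolean inputs are assumptions standing in for "the parent txout is txid:vout", "the parent script had this script on it", and "the outputs don't inflate the colored amount".]

```python
# Sketch of the one-step-back ("induction") colored-coin check: a spend is
# colored iff its parent met the birth criteria OR itself ran this checker,
# AND the current tx's outputs conserve the colored amount. Coins remain
# spendable either way; failing the check just means "not colored".

GENESIS = ("txid-0", 0)  # hypothetical birth outpoint for this color

def is_colored_spend(parent_outpoint, parent_ran_checker, outputs_conserve_color):
    born = parent_outpoint == GENESIS       # gmaxwell's "birth criteria"
    inherited = parent_ran_checker          # parent scriptSig had this script
    return (born or inherited) and outputs_conserve_color

# Genesis spend: parent is the birth outpoint.
assert is_colored_spend(GENESIS, False, True)
# Ordinary descendant: parent ran the checker, color conserved.
assert is_colored_spend(("txid-1", 0), True, True)
# Fraudulent coloring: parent neither genesis nor checked.
assert not is_colored_spend(("txid-2", 1), False, True)
```

Because validity at each step implies the parent was valid, an SPV verifier only ever inspects one step back, which is the point maaku conceded above.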
13:59 < petertodd> maaku: basically the scriptSig contains:
14:00 < petertodd> maaku: that's what SHA256 does; *if* ASIC mfg capacity is available, then a really simple PoW like sha256 is ideal because your startup costs to make a miner ASIC are low
14:00 < petertodd> adam3us: sequential != cacheable
14:00 < maaku> petertodd: oh, i mean things like highly interconnected cores, greater memory bandwidth, etc.
14:01 < petertodd> adam3us: for real-world memory sequential is faster however, due to the fact that real-world ram talks to cpus on a bank level, among many other considerations
14:02 < adam3us> maaku: i like the parallella chips. many risc cores on a chip. like a gpu but without the custom graphics stuff and without SIMD
14:02 < petertodd> maaku: well, see, the problem with stuff like that is what is available as commodity changes over time; you have to target some architecture with a very high chance of existing in the future
14:02 < maaku> adam3us: like Cell and APU
14:02 < adam3us> petertodd: right. sequential access is faster on existing hardware because they optimize for it
14:03 < petertodd> adam3us: not so much because they optimize for it, but because it's the only possible way to build the hardware
14:03 < petertodd> adam3us: I mean, you could optimize for something else, but the limitations of silicon strongly suggest bank-accessed designs
14:03 < maaku> petertodd: no, that's my point - you target something which you would like to become available, because it is beneficial for other purposes (e.g. those are things I would like for commodity supercomputers)
14:03 < maaku> or rather, things which are available now but in limited quantities
14:04 < petertodd> adam3us: similarly you need designs where the cpu<->mem interface happens in packets because high-speed parallel buses are impossible to make
14:04 < maaku> and let the market push industry further in that direction
14:04 < petertodd> maaku: oh, I see, yeah, but that's a very risky strategy that's just as likely to lead to some ASIC that's overly optimized and useless for any real-world thing
14:05 < petertodd> maaku: for instance, PoW mining can tolerate way higher error rates than almost any other application
14:06 < adam3us> maaku: so one hypothesis is to use halting-problem logic to search for instruction sequences and what state they put some memory into, or something like that, on an open risc cpu design. if people want to make those fast that's a public good. however typically there is going to be something that can be stripped to advantage to make them faster/more energy efficient as miners.. but yes it's an interesting direction.
14:08 < adam3us> maaku: basically the lesson i draw is a) hardware wins; b) a lot of software people don't know much about highly optimized custom hw design nor the limiting factors
14:09 < petertodd> adam3us: hence why I think we're a lot more likely to come up with PoW that is FPGA-soft rather than FPGA-hard
14:11 < adam3us> in some way of thinking, re jtimon's argument yesterday about energy efficiency, asic hashcash-sha256^2 could be argued to be more energy efficient than gpu-hard. so other than the centralization issues coming from the hw manuf barrier to entry perhaps that's not so bad. (ie the more profitable mining is above investment the more likely it is to be energy efficient)
14:11 < petertodd> adam3us: meh, I like to beat on nature
14:13 < adam3us> the other obvious approach is to change the PoW periodically, or put dozens of building blocks into it and change the way they are connected to define new mining variants. have a % of reward allocated to different PoW params, and adjust the difficulty of each param-set to match the % target.
14:13 < jtimon> adam3us I guess jgarzik just convinced me here http://www.coindesk.com/bitcoin-developer-jeff-garzik-on-altcoins-asics-and-bitcoin-usability/
14:13 < jtimon> sorry, afk for some time
14:14 < petertodd> jtimon: ugh... that's really ill-informed
14:14 < petertodd> jtimon: god-damn software engineers :p
14:14 < adam3us> u note how sergio lerner posted that mem-hard pow with today's date. he seems to be a bit secretive and then reveal things when pushed. he still didn't reveal his claimed coin anonymity
14:15 < petertodd> adam3us: one thing that worries me about "change the pow constantly" schemes is they can turn the "ASIC-hard" problem into a secret *software* problem
14:16 < petertodd> adam3us: IE, if I'm an FPGA mfg and I put my experts onto the problem of making meta FPGA programming code to target the PoW most efficiently every day
14:16 < petertodd> adam3us: that industry has enough secrets that it'd be a winner-take-all situation potentially
14:21 < petertodd> Oh, here's a nice proof-of-existence: suppose you have a scrypt-like sequential-hard PoW function. Now, they kinda suck because to verify them you need a ton of RAM and a lot of CPU power, right? However, you can also make a SCIP/ZK-SNARK style proof of the pow solution and verify that instead.
14:21 < petertodd> Thus we know you can make sequential-hard PoW with fast verification.
14:22 < petertodd> Of course, there's the real-world problem where the SNARK proof-creation is a better PoW than the scrypt... :)
14:22 < petertodd> maaku: ^ though that might be a useful way to optimize SNARK proof-creation of course...
14:22 < petertodd> (I think gmaxwell? suggested basically that for SCIP stuff?)
14:25 < adam3us> petertodd: yes. my supposition would be the hw people would make fpgas with reconfigurable buses etc between the lumpy modules that can be rewired the same as the sw. "hw wins" etc
14:25 < gmaxwell> though even though snark validation is fast it's still slower than SHA256 by a fair margin. (see the vntinyram paper for state-of-the-art numbers on the verification of the ggpr stuff... but anything else is not going to be much faster)
14:26 < petertodd> now the non-snark version of that is easier to understand: fill up some ram with the function D[i] = H(D[i-1]), then do a merkle tree over the ram and do samples to prove the transitions are honest. but the issue there is basically that the # of samples you pick relates very strongly to how much parallelization you can get away with without a high chance of getting caught out on fraud
14:26 < petertodd> gmaxwell: yeah, faster than scrypt though, right?
14:27 < gmaxwell> dunno. scrypt as used in ltc is slow but it might just be comparable.
14:27 < petertodd> gmaxwell: yeah, anyway, the PoW validation slowness isn't a deal-breaker, just annoying
14:27 < petertodd> gmaxwell: bigger issue is that really ASIC-hard PoWs are a lot slower and use a lot more ram than scrypt...
14:28 < adam3us> petertodd: i think the fiat-shamir transform can make the failure from skipping calc steps start to lose fast. this is what coelho's merkle hash PoW introduced, and dagger uses even more links to reduce it from like 3% down to < 1%
14:28 < petertodd> gmaxwell: (well, LTC-style scrypt params)
14:29 < petertodd> adam3us: yup, however what's nasty about it is if you start thinking about how fast actual hash primitives really are - a fair bit slower than main memory bandwidth right now
14:30 < adam3us> petertodd: indeed. it would help if people used a faster hash or the custom design u mentioned yesterday (hash rounds spread across the tree)
14:31 < petertodd> adam3us: yeah, also re: my "fraud == parallelism" argument, maybe you want the bottom of the tree to be fairly big chunks of memory being hashed anyway, which makes spreading a strong hash out make more sense
14:31 < petertodd> adam3us: like I was saying above about how ram is banked anyway
14:33 < petertodd> adam3us: oh, and here's a consideration: you probably want to minimize the time and space of the merkle tree over the data being hashed, because if you don't you can optimize by making a better merkle hasher
14:33 < gmaxwell> petertodd: sequentiality to some extent prohibits being progress-free, so unless the sequential part is very fast you are creating an advantage for faster miners.
14:33 < petertodd> For instance, notice how the # of nodes in a full binary tree is 2x the bottom layer, so you do need the bottom-layer work cost to be >> making the tree
14:34 < petertodd> gmaxwell: well how fast is fast enough? I'd argue keep the PoW creation to < 1s or so and it's in line with latency assumptions anyway
14:35 < gmaxwell> I think it has to be a small fraction of latency in order to not matter.
14:35 < gmaxwell> keep in mind wrt the snark idea: snark _creation_ will always be much slower than execution
14:36 < petertodd> gmaxwell: I would have thought a small fraction of block interval - network latency is a similar impact to PoW latency
14:36 < adam3us> heh, watching the dexel/alpha indian people's video on their coming 28nm scrypt asic. did anyone figure out if their demo fpga version was a net win already? this should be fun to watch if they deliver
14:36 < petertodd> gmaxwell: oh, obviously if you do it snark-style you've gotta have the snark proof finish in < 1s - very difficult
14:36 < petertodd> gmaxwell: although maybe ok if the snark is only for the sake of SPV
14:36 < gmaxwell> adam3us: the ltc fpgas were a power-usage win.
14:36 < gmaxwell> petertodd: well in particular because you could do proofs of the whole sum rather than a single header.
14:37 < petertodd> gmaxwell: note how with all this stuff I'll bet you having an FPGA attached to some RAM would be a power win
14:37 < petertodd> gmaxwell: good point
14:56 < adam3us> gmaxwell: i believe that is correct. (progress-freedom and the ratio of the minimum work unit on a single core to the block interval)
15:00 < adam3us> gmaxwell: it places a limit on how memory-hard you can hope to be, which also relates to the fastest crypto hash that can drive the memory
15:00 < Luke-Jr> FPGAs were only a power win for Bitcoin as well
15:01 < andytoshi> petertodd: you don't need to have fast snarks if your block time is something like several hours
15:01 < andytoshi> that may be desirable anyway if you want an anonymous high-latency mechanism for getting txes to miners in the first place
15:01 < petertodd> andytoshi: true, although I suspect several-hour block intervals have user-acceptance problems
15:02 < andytoshi> yeah, i really doubt such a system would be good for general use. but there are situations where several-day verification is ok (and it still beats visa :P), eg if there are long shipping or manufacturing times anyway
15:05 < gmaxwell> keep in mind that someday bitcoin might be several-hour confirmations, if you can't count on the network to converge in one block anymore, e.g. due to implementation inconsistencies, high latencies, bursty mining due to mining for fees, gnarly behavior from miners wrt "rational mining" that is willing to reorg if it's positive expectation
15:05 < gmaxwell> even with 10 minute blocks.
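[editor's note: the non-SNARK scheme petertodd describes above (fill RAM with D[i] = H(D[i-1]), merkle-commit, spot-check sampled transitions) can be sketched directly. A minimal sketch, assuming SHA-256 for H, a power-of-two memory size, and 16 samples; a real protocol would also reveal merkle paths against the committed root rather than the whole array.]

```python
# Sequential-hard PoW sketch: sequential fill, merkle commitment, and
# random spot-checks of transitions. A cheater who skipped k of n
# transitions survives s independent samples with probability ~(1-k/n)**s,
# which is petertodd's "fraud == parallelism" trade-off.
import hashlib
import random

H = lambda b: hashlib.sha256(b).digest()

def fill(seed: bytes, n: int) -> list:
    """Sequentially fill memory: D[i] = H(D[i-1])."""
    D = [H(seed)]
    for _ in range(n - 1):
        D.append(H(D[-1]))
    return D

def merkle_root(leaves: list) -> bytes:
    """Root of a full binary merkle tree (len(leaves) a power of two)."""
    layer = leaves
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def spot_check(D: list, samples) -> bool:
    """Verify the sampled transitions really are sequential hash steps."""
    return all(D[i] == H(D[i - 1]) for i in samples)

D = fill(b"block header", 1024)
root = merkle_root(D)          # 32-byte commitment the prover publishes
samples = random.sample(range(1, 1024), 16)
assert spot_check(D, samples)

# A filler that skipped work fails an exhaustive check (and each random
# sample catches it with probability k/n).
D_bad = list(D)
D_bad[500] = H(b"skipped")
assert not spot_check(D_bad, range(1, 1024))
```

Note petertodd's sizing remark also shows up here: a full binary tree has roughly 2x as many nodes as leaves, so the bottom-layer fill must dominate the cost of building the tree or an optimized merkle hasher becomes the real PoW.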
15:13 < adam3us> btw i was thinking part of the anti-litecoin fast-confirmation argument may be partly false. it can be claimed that 12 litecoin confirms (30min) is weaker than 6 bitcoin ones (60min). but consider your probability as a selfish miner of winning: p^24 << p^6. in fact even p^12 << p^6 etc.
15:14 < Luke-Jr> adam3us: consider the attacker doesn't need to worry about stale blocks also
15:14 < Luke-Jr> adam3us: there are a lot of factors involved
15:14 < Luke-Jr> fast blocks = more hashes wasted by the legit miners
15:14 < Luke-Jr> scrypt = slower block propagation
15:14 < adam3us> Luke-Jr: oh yes (i said partly) the short block time is worse for orphans and gives well-connected low-latency miners more advantage
15:14 < adam3us> Luke-Jr: that too
15:15 < Luke-Jr> adam3us: right; there's advantages and disadvantages
15:15 < Luke-Jr> IMO they more or less balance out
15:19 < adam3us> i wonder what it does for the selfish mining attack though. the ghost (hash in non-conflicting orphans) approach seemingly allows faster blocks because orphans are not wasted. so hypothetically ghost + bitcoin/sha256 mining + eg 2.5min intervals, and still 1hr confirmations. i wonder if the selfish miner loses in that circumstance
16:54 < jtimon> Luke-Jr I don't see the balance, Scrypt is neither "anti-ASIC" nor anti-GPU
16:55 < jtimon> what's the gain Scrypt has over SHA256?
16:55 < phantomcircuit> jtimon, nothing
16:56 < sipa> scrypt is certainly anti-gpu, if it'd use more than 128 KiB of RAM...
16:56 < Luke-Jr> jtimon: there is none, that's my point
16:57 < sipa> despite that, i'm very unconvinced that it has any advantages for bitcoin or similar systems
16:57 < EasyAt> scrypt ASICs will have a giant die size, no?
16:57 < c0rw1n> 128KB ram doesn't take much die space
16:58 < sipa> well the point would be to make the cost of the ASIC be dominated by fast memory
16:58 < jtimon> "IMO they more or less balance out" ok so this is just sarcasm?
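[editor's note: adam3us's p^12 << p^6 point can be made concrete with the gambler's-ruin catch-up probability from the Bitcoin whitepaper. A toy illustration only: it models an attacker with hashpower fraction q overtaking a k-block lead, and deliberately ignores the stale-block and propagation effects Luke-Jr raises, which cut the other way.]

```python
# Attacker catch-up probability vs. confirmation count: for q < 0.5 it is
# (q/(1-q))**k, so doubling k (faster blocks, same wall-clock time) shrinks
# the success probability geometrically.
def catchup_prob(q: float, k: int) -> float:
    """Probability a q-fraction attacker ever overtakes a k-block lead
    (simple gambler's-ruin model; certainty once q >= 0.5)."""
    return 1.0 if q >= 0.5 else (q / (1.0 - q)) ** k

q = 0.2
p6 = catchup_prob(q, 6)    # 6 confirms, e.g. 60 min of 10-min blocks
p12 = catchup_prob(q, 12)  # 12 confirms, e.g. 30 min of 2.5-min blocks
assert p12 < p6            # more, faster confirmations -> smaller chance
```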
16:58 < Luke-Jr> jtimon: no? we're talking about faster block times there, and how it doesn't make transactions any faster really.
17:00 < jtimon> Luke-Jr ok thanks
17:00 < jtimon> sipa what's the point of "make the cost of the ASIC be dominated by fast memory"?
17:01 < sipa> not saying this is a good idea, just reasoning about how you'd make an anti-asic pow
17:01 < gmaxwell> It's an attempt to reduce the gap between commodity hardware and specialized hardware.
17:02 < sipa> if the cost of the asic is dominated by memory, it's unlikely to provide much gain over state-of-the-art cpus connected to as much fast ram as you can find on the market
17:02 < sipa> as the cpu will not be the bottleneck
17:02 < gmaxwell> I think it's generally a poor idea for pow-consensus systems though. My reasoning is that the most you can probably do is get the gap down to 2:1 (or really probably more like 10:1), and even at 2:1 the commodity hardware will probably be completely excluded.
17:02 < gmaxwell> Vs in KDF usage getting the custom hardware advantage down to just 10:1 would be great.
17:02 < sipa> i think it's a great recipe if you want botnets
17:03 < gmaxwell> Well botnets too, if you don't pay for power because you're stealing it you don't mind that you're 10x less efficient than custom hardware.
17:03 < jtimon> I assume it also has to be anti-GPGPU, right?
17:04 < sneak> the nice thing about kdfs is that it's ok to use eleventy bazillion iterations too
17:04 < sneak> because most use-cases don't mind a 500msec wait
17:04 < gmaxwell> well KDFs want fast verification too, but a few hundred ms is okay usually.
17:04 < sipa> jtimon: unless GPUs would happen to have better memory bandwidth :)
17:05 < gmaxwell> They don't want generation/verification asymmetry, which we want for hashcash usage.
17:06 < gmaxwell> sipa: generally GPUs have had much better memory bandwidth than j-random-cpu. (though horrible memory latency relative to their clockrate)
17:07 < jtimon> sipa GPUs will have better memory bandwidth; they're not only for graphics anymore, they're the present/near-future of supercomputing
17:07 < jtimon> and some problems have their bottlenecks in memory
17:08 < gmaxwell> jtimon: graphics work is generally memory-throughput limited.
17:11 < jtimon> I'm just saying that GPU designers are not only optimizing for graphics, there are more problems being solved with other demands
17:11 < jtimon> GPU architectures can change
17:12 < jtimon> maybe you're right and the GPGPU people won't ask for those constraints to be improved
17:12 < gmaxwell> jtimon: that's probably the same processor/coprocessor cycle that has gone on since the start of computing. Presumably GPUs will eventually go away and just be subsumed into cpus (or vice versa)
17:13 < gwern> the wheel of reincarnation
17:14 < gmaxwell> (e.g. how FPUs and standalone short vector units became standard cpu features)
17:14 < jtimon> gmaxwell: interesting prediction, but you've said two options, so that's my point, we can't predict the future of hardware; what architecture are we anti-optimizing against?
17:15 < sipa> i'm not sure it matters
17:15 < jtimon> yeah gmaxwell xmm mmx
17:15 < gmaxwell> jtimon: we? I think it's all stupid regardless. :)
17:16 < gmaxwell> as I said, I don't think arch targeting can prevent there being at least a small constant improvement from dedicated implementations. Since mining is ~near-perfect competition that small factor is enough to generally exclude the non-specialized stuff regardless.
17:16 < gmaxwell> And so simple circuits like SHA256 at least improve equality of access: anyone can design a sha256 asic which is pretty competitive (well, if not actually fabricate it themselves)
17:17 < gmaxwell> vs if you really did build something that required AMD-scale engineering, then you'd much more likely have a hardware monopoly or near so.
17:17 < gmaxwell> simple fast circuits also have fast verification, which is very helpful too.
17:18 < jtimon> ok, so I see you have even more reasons than me against the "quest for the perfect mining function"
17:19 < gmaxwell> I think that like a lot of things in engineering you can only optimize so far and then it's all just messy tradeoffs.
17:19 < sipa> heh, maybe we need an altcoin optimized for ASICs
17:20 < gmaxwell> DES POW.
17:20 < sipa> where the PoW function has a trivial optimal circuit design
17:20 < jtimon> targeting GPU-friendly but ASIC-hard is especially odd to me since 1) as you said the latter doesn't really exist 2) GPUs are already a market with concentrated production (the problem supposedly solved by "hardness")
17:20 < jtimon> sipa there's one alt named ASICcoin
17:21 < gmaxwell> DES sboxes make for trivial combinatoric logic; it's much slower on current cpus/gpus than it is in direct hardware, all other things equal.
17:22 < gmaxwell> the sha256 circuit is really straightforward already. You can get some gains by careful staging to equalize latencies...
17:33 < andytoshi> i have a crazy idea (involving nonexistent crypto) for a research pathway to a SNARK without forge-enabling keying material: http://download.wpsoftware.net/bitcoin/wizardry/public-fhe.pdf
17:34 < andytoshi> throwing it out here because there's probably something obviously dumb about it, and you guys are good at catching that stuff
17:58 < gwern> http://www.reddit.com/r/ethereum/comments/1vh94e/dagger_updates/
18:30 < jtimon> how is "computer hardware" not "theoretical computer science"?
18:32 < jtimon> oh, not experts in hardware, I misread
21:56 < gmaxwell> There was a puzzle in the MIT mystery hunt that some folks here would like solving.
21:57 < gmaxwell> oh. crud. I guess I can't post it until after the hunt is over, so forget the last line for three days.
23:38 < jcrubino> is it possible to have an address that is both a valid litecoin and bitcoin address?
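[editor's note: the mechanics behind jcrubino's closing question come down to Base58Check: an address commits to a network-specific version byte under a double-SHA256 checksum, so whether one string is valid on two chains depends entirely on whether both networks accept the same version byte. A hedged sketch; the encoder below is a minimal textbook Base58Check, and the all-zero hash160 payload is a dummy value.]

```python
# Base58Check sketch: same 20-byte payload, different network version byte
# (0x00 for Bitcoin P2PKH, 0x30 for Litecoin P2PKH) yields different
# strings, each failing the other network's version check even though the
# checksum arithmetic is identical on both.
import hashlib

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check_encode(version: int, payload: bytes) -> str:
    data = bytes([version]) + payload
    checksum = hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    full = data + checksum
    n = int.from_bytes(full, "big")
    s = ""
    while n:
        n, r = divmod(n, 58)
        s = ALPHABET[r] + s
    # each leading zero byte is encoded as a literal '1'
    pad = len(full) - len(full.lstrip(b"\x00"))
    return "1" * pad + s

h160 = bytes(20)  # dummy hash160 of a public key
btc = base58check_encode(0x00, h160)  # Bitcoin P2PKH -> leading '1'
ltc = base58check_encode(0x30, h160)  # Litecoin P2PKH -> leading 'L'
assert btc != ltc
```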
--- Log closed Sat Jan 18 00:00:29 2014