00:14:00 | justanotheruser: | justanotheruser is now known as just[dead] |
00:20:36 | just[dead]: | just[dead] is now known as justanotheruser |
00:24:06 | Ademan: | This was linked in that email thread on cypherpunks someone linked in here a couple of days ago: https://web.archive.org/web/20000325042121/http://www.ait2000.com/egold.htm I thought "They did not realize that merchants and consumers are extremely disinclined to tolerate the slightest inconvenience simply to try something new. ECash, for instance, required the download of client software" was pretty relevant. |
00:38:59 | cpacia: | cpacia has left #bitcoin-wizards |
00:52:05 | cpacia: | cpacia has left #bitcoin-wizards |
02:03:26 | Ademan: | Ademan is now known as Blondie |
02:06:00 | Blondie: | Blondie is now known as Ademan |
02:29:34 | lechuga_: | lechuga_ has left #bitcoin-wizards |
02:57:16 | justanotheruser: | justanotheruser is now known as DOGGE |
02:57:47 | DOGGE: | DOGGE is now known as justanotheruser |
03:15:07 | Guest76153: | Guest76153 is now known as roidster |
04:12:26 | rdponticelli: | rdponticelli has left #bitcoin-wizards |
04:54:02 | emsid: | emsid is now known as mikegoldberg |
05:04:38 | justanotheruser: | justanotheruser is now known as just[dead] |
05:08:53 | just[dead]: | just[dead] is now known as justanotheruser |
05:21:03 | mikegoldberg: | mikegoldberg is now known as emsid |
07:05:39 | iddo_: | iddo_ is now known as iddo |
08:23:51 | michagogo|cloud_: | michagogo|cloud_ is now known as michagogo|cloud |
08:57:16 | wangbus_: | wangbus_ is now known as wangbus |
14:01:17 | justanotheruser: | justanotheruser is now known as just[dead] |
14:13:10 | stonecoldpat: | anyone here found any new bitcoin papers @ conferences ? latest i can find is the multiparty one |
14:18:10 | rastapopuloto: | rastapopuloto has left #bitcoin-wizards |
15:08:59 | roidster: | roidster is now known as Guest26727 |
17:08:11 | maaku: | amiller: so because the high-value-hash highway doesn't include the commitments within the high-value block itself, you can't guarantee that the elided blocks actually represent that stated work |
17:10:05 | maaku: | whereas by committing to some number of back-links in advance, and then only using those links which are less work back than the apparent work of the block, you can actually guarantee that the compact spv proof couldn't have been made with less work than it purports
17:14:59 | amiller: | i don't agree, the hvh blocks do include commitments to all of the previous blocks |
17:16:29 | amiller: | let me see if i have pseudocode, i think you're misinterpreting what the proposed scheme was, but i'm also not sure yet what you think it is. |
17:17:42 | maaku: | amiller: I can select a high-value-hash block in the past, build one block on it at the current difficulty, and in that child block link back much further in the history, no? |
17:19:12 | amiller: | uh, yes. if you build on the previous highest value block, then your "back" link would be to that highest-value-hash, and your "up" link would be straight to genesis. |
17:20:42 | maaku: | well it *should* be, but in actuality I could choose something else |
17:21:18 | maaku: | if my intent is to defraud someone |
17:21:28 | amiller: | you don't get to choose |
17:21:37 | amiller: | the rule is your back link always points to the previous block you are building on |
17:22:34 | amiller: | the "up" link always points to the most recent block that is *one larger* than the previous block |
17:23:09 | maaku: | amiller: I understand that it would make my block invalid |
17:23:53 | maaku: | my question is: can I create an *invalid* block which is used in the construction of a fraudulent SPV proof, which nevertheless purports to have more work than is needed to create it |
17:25:57 | amiller: | right, and i dont see how your counter example is working towards that? |
17:26:29 | just[dead]: | just[dead] is now known as justanotheruser |
17:27:16 | amiller: | essentially my SPV proof is probabilistic and done interactively with a full node that has all this data at hand |
17:27:35 | amiller: | the verifier chooses a level of depth basically |
17:27:51 | amiller: | and the prover does the following process which selects the "top layers" of the structure |
17:28:15 | maaku: | amiller: yes, and we're trying to build a compact spv proof which does not assume access to the full block headers/ full node, and which is guaranteed not probabilistic
17:28:15 | amiller: | first you follow the "up" links until you arrive at the highest value hash (something that points to the genesis) |
17:28:42 | amiller: | uh well good luck with deterministic.... |
17:30:41 | maaku: | amiller: we've already sorted it out (it's in the email) |
17:31:36 | maaku: | by committing to the back links ahead of time, you make sure that the cost of recreating a compact spv proof is >= the cost of creating the chain in the first place |
17:32:28 | amiller: | its still not the slightest bit clear to me what the difference is or how this is better than mine |
17:32:43 | amiller: | the phrase "committing to the back links ahead of time" absolutely describes mine as well |
17:33:26 | amiller: | i'm not saying this because i want you to prefer mine, i'm just trying to figure out how to describe the whole thing more clearly |
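A minimal sketch, with invented names, of the back/up link rule amiller describes above; it only illustrates how a new block could derive its "up" link from the previous block's value, and is not anyone's actual construction.

```python
# Hypothetical sketch (all names invented): "back" points to the block being
# built on, "up" points to the most recent earlier block whose hash value
# beats that previous block's value (None stands in for genesis).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    value: int                   # e.g. number of leading zero bits in the block hash
    back: Optional["Block"]      # the previous block (None for genesis)
    up: Optional["Block"]        # skip link toward higher-value ancestors

def make_block(value: int, prev: Optional[Block]) -> Block:
    # Find the most recent ancestor whose value is larger than prev's value.
    # A real implementation would follow existing up links to skip ahead;
    # a plain walk over back links is used here for clarity.
    up = prev.back if prev is not None else None
    while up is not None and up.value <= prev.value:
        up = up.back
    return Block(value=value, back=prev, up=up)
```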
17:34:00 | gmaxwell: | amiller: So assume diff=1. Say a diff 100 block shows up. Then sometime after it I make a diff 1 block that has a backlink 50 blocks back to the diff 100 block. |
17:34:31 | gmaxwell: | Except, I'm lying, there have only been three blocks since that diff 100 block, and they're not even included in my commitment.
17:34:36 | amiller: | gmaxwell, for the sake of clarity could we converge on a couple notation things first |
17:34:42 | amiller: | i distinguish between "value" and "difficulty" |
17:34:55 | amiller: | "value" is the actual number of zeros, difficulty is the target |
17:35:24 | gmaxwell: | lets assume difficulty=1 and lets talk about 1/value which is normalized to difficulty. |
17:35:37 | amiller: | also each block in hvh has two links, a "back" and an "up", i think what is happening here is there are a variety of "back links" and they're chosen by the miner |
17:35:41 | gmaxwell: | rvalue=1 is the worst acceptable block for diff=1 |
17:37:03 | gmaxwell: | what maaku is talking about allows you to take a collection of headers from someone, and then inspect them and conclude that the total work was the same as the total work you would have gotten by linear walking prev... and no false set of headers could be produced which would trick a verifier without doing just as much expected work. |
17:38:40 | amiller: | ok let me think about that for a second, but first expected work is a pretty useless statistic if the variance is high, a major part of my design was to also get a very low variance |
17:39:36 | amiller: | in other words a really really low probability that if the attacker claims W, and actually did W*(1-d), then the probability of success is negligible in d
17:41:44 | gmaxwell: | you get low variance for this too eventually. In what you were talking about I think your variance is only low in the far past.. But if there were a recent very low-value block, it would totally wipe out the chain, which is a bad property for convergence and leads to withholding attacks.
17:42:30 | amiller: | ok if you think that's the case let me try to understand what your counter example you're saying is because i dont get it yet |
17:42:33 | gmaxwell: | e.g. there is a block right now that has a value lower than the total work in the chain (by several bits) if it were orphaned and then later announced and we used those numbers for chain selection it would totally wipe out the chain between its creation and now. |
17:44:36 | amiller: | you're saying there's a way with my scheme that an orphaned block with a lot of work can be included in the sample, even though it isn't really part of the chain |
17:46:08 | gmaxwell: | Sure, I mine a subsequent block, hand it to you (and its parent), now you look at it and determine the expected work in the data I gave you, vs the real chain.
17:51:22 | amiller: | hrm, i'm not sure what to say about your example |
17:51:29 | gmaxwell: | lets try another approach. |
17:51:30 | amiller: | the "expected work *in* the chain" is still accurate |
17:51:53 | amiller: | it's just that it's not a valid chain because it includes orphans that aren't in sequence order properly
17:52:08 | amiller: | one thing that my structure (probably yours too) facilitates is finding the intersection point between two chains |
17:52:50 | gmaxwell: | (as an aside, the work computed from committed threshold hashcash is half that of what you compute from value hashcash.)
17:55:46 | gmaxwell: | Lets consider a game. I give you two collections of headers, and commitments under them under some rule system: A and B. I win the game if I can convince you A has more expected work than B, when, if you follow both of them via only the one-at-a-time prev links, B has more work than A, while I have done less work than the difference myself.
17:56:51 | amiller: | well hold on, even there, i want to clarify what you mean by "has more expected work" |
17:57:03 | amiller: | are they both valid chains or they can be arbitrarily corrupt? |
17:57:19 | gmaxwell: | They can be arbitrarily corrupt. |
17:57:37 | gmaxwell: | or better, say just B can be arbitrarily corrupt. |
17:57:47 | gmaxwell: | since if both are dishonest I don't really care. |
17:57:57 | amiller: | say A is possibly corrupt because you're convincing me about A
17:58:00 | gmaxwell: | er, sorry. |
17:58:00 | gmaxwell: | yea. |
17:58:11 | gmaxwell: | confusing myself there. I'm glad you're following. |
17:58:19 | amiller: | so what i'm trying to clarify is how to interpret "shared" work |
17:58:29 | amiller: | because some of the "work contained in A" comes from me, some of it is shared with B |
17:59:15 | amiller: | i think it is essential to determine if they have a common subset |
17:59:24 | amiller: | (possibly the genesis) |
17:59:36 | amiller: | and if so, to then have a way of comparing the work that's in one and not in the other, from that point forward |
17:59:46 | gmaxwell: | sure, you can just assume the last header in both of these proofs is the genesis, to prove it connects. |
17:59:58 | amiller: | no but what i mean is to find the largest common subset |
18:00:49 | amiller: | largest common prefix |
18:05:03 | ielo: | Persopolis, have you ever been to persepolis |
18:10:29 | Persopolis: | I have :) |
18:11:14 | tacotime: | An Iranian contacted me a while ago asking for some of the source code for my alt. I suppose they must have some people there working on this stuff too. |
18:12:36 | Persopolis: | wouldn't be surprised at all - tech-savvy community who is constantly finding ways to circumvent the gov blocking internet services
18:12:59 | amiller: | gmaxwell, so the key thing is that every sample used in comparing the work will come *after* the common sequence point |
18:13:11 | amiller: | this is checked by establishing a path from each of those samples to that fork point |
18:13:49 | amiller: | likewise all the samples used to measure that common prefix before the fork point are checked with paths from the fork point
18:14:02 | amiller: | so there's no chance of any work being double counted before and after the fork point. |
18:14:50 | amiller: | it still remains to argue that you can't double count the same work in A and in B *after* the fork point |
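A minimal sketch of the fork-point comparison being described, assuming both sides share a genesis and each header carries a claimed-work field; the field names are hypothetical.

```python
# Rough sketch: locate the largest common prefix of two header lists, then
# count only the claimed work strictly after that fork point, so nothing
# before the fork can be double counted toward either side.
def fork_index(chain_a, chain_b):
    """Index of the last common header; both chains are assumed to start at genesis."""
    i = 0
    while i < min(len(chain_a), len(chain_b)) and chain_a[i].hash == chain_b[i].hash:
        i += 1
    return i - 1

def work_after_fork(chain, fork):
    # per-block difficulty targets, not hash values, matching the
    # "difficulty, not value" point made later in the discussion
    return sum(header.work for header in chain[fork + 1:])
```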
18:14:53 | gmaxwell: | great, and so lets say that A has an orphan block with very low value from the real network as its first non-common point, and then some trivial amount of additional blocks past that fork which I created.
18:15:24 | amiller: | ok so this is work that's *not* present on B |
18:15:31 | amiller: | but IS included in A? |
18:15:31 | gmaxwell: | yes. |
18:15:36 | gmaxwell: | yes. |
18:15:36 | amiller: | except there's a break in the chain |
18:16:27 | amiller: | okay so i don't see how you're going to deterministically avoid any breaks in the chain without checking all the links, that's outside what i tried to solve with this |
18:16:55 | amiller: | for the same amount of work on your part, you could legitimately build on that orphan and just include an invalid transaction |
18:17:00 | gmaxwell: | okay the SPV game I described is the problem we need to solve to produce compact proofs that one commitment set is superior to another. |
18:17:38 | amiller: | the expected work that is not in B but that was put into A is correctly estimated, the only problem is there's one invalid edge. |
18:19:29 | gmaxwell: | (and not just 'superior' but superior by the block-at-a-time definition, which is what we actually need (changing the network's definition of best has other problems, e.g. withholding attacks))
18:21:27 | amiller: | here's something that i don't think i mentioned in my post or in here but i have taken for granted |
18:21:35 | amiller: | which is that in each block you would commit to the total amount of work represented by that block |
18:21:57 | amiller: | (value doesn't matter, only difficulty, for this) |
18:22:09 | gmaxwell: | sure. Like bitcoin does today. |
18:22:16 | gmaxwell: | or do you mean total past work? |
18:22:19 | amiller: | total past work |
18:22:22 | gmaxwell: | in that case you could lie. |
18:22:43 | amiller: | yes, but, the point is you are still going to check using this procedure |
18:22:49 | amiller: | one thing it helps you with is partitioning because
18:22:58 | amiller: | the previous highest-hash-value roughly divides the total work in half |
18:23:11 | amiller: | i don't think this distinction matters really |
18:23:21 | amiller: | i'm still trying to figure out what you are saying it is you can do with that orphan block |
18:23:42 | gmaxwell: | well the number you get from your procedure is not the same as the total (linear scan) work, e.g. not precise. |
18:23:42 | amiller: | if there's an orphan block and it's honest, then it didn't lie |
18:23:49 | amiller: | if it does lie, then it's part of you the attacker and it's fair game to include it |
18:24:02 | gmaxwell: | the attacker doesn't have to lie about everything. |
18:24:25 | gmaxwell: | it seems to me that you're redefining best work to be best under these proofs rather than best linear work, and that has some horrible side effects. |
18:24:37 | gmaxwell: | e.g. would you really consider the one with the orphan better (assuming its otherwise valid)? |
18:24:51 | gmaxwell: | if so I could withhold the orphan after I get insanely lucky and reorg away all the future work.
18:24:58 | amiller: | i am describing it as a hypothesis test |
18:25:03 | amiller: | each block commits to a total amount of past work |
18:25:07 | gmaxwell: | e.g. right now the all-time best block could reorg off several months of work.
18:25:23 | amiller: | your goal is to do a test to prove that it can't have been conducted except with really low probability and some really close bound to a tie |
18:27:19 | amiller: | no there's no way the all time best block would help you very much in trying to make an attack that reorgs several months of work |
18:28:04 | amiller: | unless the block was made by an attacker and lied about the work behind it (in which case it's dishonest, and the expected attacker's work is exactly what you wanted it to be), AND the person who's being fooled is using an invalid choice of parameters, i.e. only checking 1 level instead of k levels
18:28:15 | gmaxwell: | well I'm saying that if you e.g. summed 1/values that would have that effect, I'm making sure you're not doing that. |
18:28:26 | amiller: | i don't just sum values for sure |
18:28:33 | amiller: | it's a recursive hypothesis test |
18:28:56 | amiller: | it takes E(W) steps to find a 2^-W hash |
18:29:06 | amiller: | it also takes E(W) steps to find two 2^-(W-1) hashes
18:29:39 | amiller: | but that doesn't mean it takes E(2W) steps to find both, putting in E(W) work you'd expect to find both of those |
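A quick Monte Carlo sketch of this point, with invented parameters: one hash below 2^-W and two hashes below 2^-(W-1) each cost about 2^W expected attempts, and the same batch of roughly 2^W attempts frequently contains both, so the costs overlap rather than add.

```python
import random

# Illustrative simulation only; W and TRIALS are arbitrary small values.
W, TRIALS = 10, 1000

def attempts_until(threshold, count):
    """Uniform draws needed before `count` values fall below `threshold`."""
    n = hits = 0
    while hits < count:
        n += 1
        if random.random() < threshold:
            hits += 1
    return n

def batch_has_both(size, w):
    """Does one batch of `size` draws contain one 2^-w hit and two 2^-(w-1) hits?"""
    draws = [random.random() for _ in range(size)]
    return (sum(d < 2**-w for d in draws) >= 1
            and sum(d < 2**-(w - 1) for d in draws) >= 2)

avg_one = sum(attempts_until(2**-W, 1) for _ in range(TRIALS)) / TRIALS
avg_two = sum(attempts_until(2**-(W - 1), 2) for _ in range(TRIALS)) / TRIALS
both = sum(batch_has_both(2**W, W) for _ in range(TRIALS)) / TRIALS
print(f"avg attempts, one 2^-{W} hash:     {avg_one:.0f} (expect ~{2**W})")
print(f"avg attempts, two 2^-{W-1} hashes: {avg_two:.0f} (expect ~{2**W})")
print(f"fraction of 2^{W}-attempt batches containing both: {both:.2f}")
```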
18:39:39 | amiller: | also for any given level, it's easy to store the proofs, it doesn't actually require storing all the data or interacting with a verifier |
18:43:05 | gmaxwell: | sure yea, if I didn't make that clear in the game above I intended it to be non-interactive. |
18:45:02 | larslarsen: | I just had a very not well thought out idea for how to protect alts from sudden drop in hashpower |
18:45:40 | larslarsen: | The only solutions are A: hardcode difficulty at block N and hardfork, B: have big holders pay themselves with huge transaction fees to lure mining power
18:46:58 | gmaxwell: | larslarsen: there is also the nlock thing to keep fees flowing forward. |
18:47:07 | larslarsen: | what if you have a backup chain, of lower security, using a different protocol, that secures the same blocks |
18:47:19 | gmaxwell: | e.g. all users should be nlocktiming their txn for where they'd expect them to get mined so there is always a queue of fees available. |
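A minimal sketch of that nLockTime suggestion, using a placeholder transaction object; only the nLockTime height semantics are real.

```python
# Lock each transaction to roughly the height you expect it to be mined at,
# so there is always a queue of fee-paying transactions that only become
# valid in future blocks. The tx object here is a stand-in.
LOCKTIME_THRESHOLD = 500_000_000   # below this, nLockTime is a block height

def queue_fee_tx(tx, current_height: int, blocks_until_expected_mining: int):
    height = current_height + blocks_until_expected_mining
    assert height < LOCKTIME_THRESHOLD, "values >= threshold are unix timestamps"
    tx.nLockTime = height          # tx cannot be mined until this height has passed
    return tx
```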
18:48:15 | larslarsen: | I dont think paying for hashing power you dont need is the answer, its a tiny alt, its difficulty SHOULD be low... if we had some time-based overlapping windows scheme we could fix it, but we dont |
18:49:22 | larslarsen: | so, in my scheme, the chain is forked after the genesis block, and POW on one chain is changed to something else, say UTXO lookup hash. Both chains continue in parallel, storing only the block header, each one with the same merkle in it |
18:50:37 | larslarsen: | If the difficulty on one gets cranked, the other chain keeps solving blocks until enough blocks have been mined to recalculate |
18:50:59 | larslarsen: | its less secure, perhaps... but it doesn't require any human intervention |
18:55:56 | larslarsen: | gmaxwell: a queue of fees won't cut it, you're going to have to pay big bucks to get a major pool to jump on something that has zero value. But if the nodes (who have a major interest in blocks completing, even at a lower difficulty) are miners of the second chain, just to keep things moving.
18:56:24 | gmaxwell: | well I don't worry too much about things with zero value, thats what merged mining is for. |
18:56:32 | gmaxwell: | If you have zero value you have other issues. |
18:56:48 | larslarsen: | what I mean is, the market cap of the coin may not be enough to lure a pool in at the current difficulty |
18:57:29 | larslarsen: | All the coins paid as fees might not be enough |
18:58:18 | gmaxwell: | the problem you run into is that in your effort to make things not suck when they suck you create potential vulnerabilities that exist even when things don't suck. |
18:58:18 | maaku: | larslarsen: demurrage |
18:58:22 | larslarsen: | I don't know how big of a problem it is, but it also kills the problem of UTXO mining being easy to buy. I could kill it with enough money and EC2 instances
18:58:40 | larslarsen: | but put a regular miner on top of it, and its got best of both |
18:59:28 | maaku: | larslarsen: if there is no value to running a coin, yes it will be insecure. but who cares? it's not valued |
18:59:29 | larslarsen: | oh yeah, I'll make a signed int instead of an unsigned int.... oh wait |
19:00:22 | larslarsen: | maaku: when I said no value, I meant "worth a tiny fraction of coins with its current difficulty level"
19:00:49 | maaku: | larslarsen: what's the connection between difficulty and value |
19:00:51 | maaku: | ? |
19:02:36 | larslarsen: | I was referring to gmaxwell's mention of vulnerabilities every time someone tries to "fix" something |
19:02:54 | larslarsen: | I dont see how a second blockhash could make anything insecure unless I make an implementation fuckup |
19:03:17 | gmaxwell: | it's not having it that makes things insecure, its doing anything with it. :) |
19:03:38 | larslarsen: | Its basically open warfare out there... people will pay a pool to kill you, in the pump, and dump you, in the dump.
19:03:49 | larslarsen: | Its ridiculous |
19:04:50 | gmaxwell: | It's not clear that bitcoin's kind of consensus can even work at all for worthless things. |
19:05:02 | gmaxwell: | (a fact that I've been pointing out for years) |
19:05:58 | larslarsen: | gmaxwell: granted, but it would be assumed less secure, and if they dont agree it does nothing. If they agree, who cares? If there is one block missing, well who cares? If the weaker chain is cheating it'll just make "nothing happen" as I said, if they dont equal |
19:07:06 | gmaxwell: | larslarsen: because I can isolate you and feed you this 'strictly inferior' thing.. Or not even isolate you, let you use the weaker thing, and then after you've had it in your consensus for a while, mine a bit at full diff and reorg it all out. |
19:08:12 | larslarsen: | True, its a planned blockrace... you have to decide which to trust |
19:08:42 | larslarsen: | I would need to flip to it only when I know that it is completing blocks at a rate faster than the "real" POW chain
19:09:08 | larslarsen: | which means actually using one chain :( |
19:09:11 | larslarsen: | ugh, thanks gmaxwell |
19:09:50 | gmaxwell: | no problem, crushing dreams is just part of a wizard's work.
19:09:56 | larslarsen: | Thats why I came here |
19:10:04 | larslarsen: | I knew it was not well thought out
19:11:19 | larslarsen: | The time traveler attack on sliding time window recalculation schemes would still work if instead of recalculating, we simply switched to the alternate POW for the next block, and went to that POW's last difficulty?
19:12:13 | larslarsen: | or is it one of a half dozen new vulnerabilities I just introduced by trying to be clever |
19:12:14 | larslarsen: | ? |
19:12:15 | gmaxwell: | the attacker still has the option. |
19:12:26 | gmaxwell: | so he'll do whatever works best for him. |
19:12:30 | larslarsen: | they would simply get the chance to use a different POW sooner right?
19:12:57 | larslarsen: | you could alternatively alternate between POWs on every block
19:13:21 | larslarsen: | even blocks always scrypt+UTXO lookup odd blocks always SHA |
19:13:41 | larslarsen: | if someone solves two SHA's in a row, sorry, you're out |
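An illustrative sketch of the alternating-PoW rule being proposed, with scrypt standing in for the scrypt+UTXO-lookup function and the UTXO part omitted; this is not a real consensus rule.

```python
import hashlib

# Even-height blocks must satisfy one proof-of-work function and odd-height
# blocks the other, so no one can solve "two SHAs in a row" by construction.
def sha256_pow(header: bytes) -> int:
    return int.from_bytes(hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")

def scrypt_pow(header: bytes) -> int:
    # stand-in for the scrypt+UTXO-lookup scheme mentioned in the log
    return int.from_bytes(hashlib.scrypt(header, salt=b"pow", n=1024, r=1, p=1), "big")

def pow_for_height(height: int):
    return scrypt_pow if height % 2 == 0 else sha256_pow

def block_pow_ok(header: bytes, height: int, target: int) -> bool:
    return pow_for_height(height)(header) < target
```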
19:15:05 | gmaxwell: | I'm not sure what you're actually trying to accomplish with that. |
19:15:22 | larslarsen: | assume difficulty on SHA goes to a billion zillion |
19:15:40 | gmaxwell: | Generally hybrid pow just means that there are higher R&D/fixed costs in making an efficient miner, which arguably should create centralization bias.
19:16:18 | larslarsen: | gmaxwell: I basically want to keep hashing power relatively level, in an environment where it is used as a weapon |
19:17:26 | larslarsen: | gmaxwell: I am not trying to fix POW, I am trying to find out a way to keep the difficulty appropriate so blocks keep flowing |
19:17:35 | gmaxwell: | adding more POW won't achieve that... as the attackers would just mine all the pows, assuming high power miners exist. |
19:18:04 | larslarsen: | gmaxwell: they would have to have full UTXO and that keeps pools out |
19:18:08 | gmaxwell: | to start, by definition achieving that end creates isolation vulnerabilities, regardless of how you achieve it.
19:18:11 | larslarsen: | they just get handed shares |
19:18:22 | wallet421: | wallet421 is now known as wallet42 |
19:18:29 | larslarsen: | not the CONCEPT of pools, but the actual pools we have now |
19:18:41 | gmaxwell: | e.g. I am a network attacker, I isolate some collection of nodes, and I simulate a network that has lost all its hashpower. Now if blocks keep flowing you'll accept transactions which are reversed when I end the isolation. |
19:19:05 | larslarsen: | I see, so if part of the network gets split, you can't just orphan blocks, you have money gone |
19:19:06 | larslarsen: | ok |
19:20:33 | larslarsen: | I think if it comes down to "well if they can crush you with hashpower, they can also replicate your entire node base to attack you" means I've done a pretty good job. |
19:21:14 | larslarsen: | but sadly, I have not |
19:21:39 | larslarsen: | I'll get back to well thought out ideas now.... thanks |
19:29:53 | larslarsen: | I could not listen to it at all, except as a timing mechanism. When the recalculation window is hit, if I see that I processed the expected number of do-nothing chain blocks but NO blockchain blocks in that time period, I can recalculate based on a block, which is atomic
19:29:56 | larslarsen: | its just a heartbeat |
19:30:23 | larslarsen: | since its an empty chain that doesn't need to propagate transactions, it can beat very fast |
19:30:36 | larslarsen: | and its based on validation as POW, so its always happening anyway |
19:32:02 | larslarsen: | and whats best, its a rolling mini chain, because who cares what its tied to |
19:43:41 | phantomcircuit: | jgarzik, is the bitpay ssl cert update email legitimate? (i cant see why not but it's worth asking)
19:57:07 | larslarsen: | Any time traveler is mining on the same block number as everyone else, and his difficulty is set just like everyone else |
20:01:22 | shesek: | I'm wondering... if I have a PoW that is composed of multiple independent PoWs, is it possible to only check some percentage of the independent PoWs and make it so that it still makes the most economical sense to be honest?
20:02:27 | gmaxwell: | sure, more or less. E.g. by using the hash of a commitment to all your pows to select which ones to reveal, but cumulative POW schemes are not generally progress free. |
20:03:34 | gmaxwell: | e.g. if instead of requiring a diff=3bn block we required a proof over 3 billion diff 1 shares, the fastest miner would ~always win.
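A rough sketch of the commit-then-sample idea gmaxwell mentions, using a flat hash as the commitment (a Merkle tree in practice) and deriving the challenge indices from the commitment itself; parameters and names are illustrative.

```python
import hashlib

# Commit to all the small proofs-of-work, then derive which ones must be
# revealed from the commitment hash, so the prover cannot pick favourable
# samples after the fact.
def commit_to_shares(shares):
    return hashlib.sha256(b"".join(hashlib.sha256(s).digest() for s in shares)).digest()

def challenge_indices(commitment, total, k):
    indices, counter = [], 0
    while len(indices) < k:
        h = hashlib.sha256(commitment + counter.to_bytes(4, "big")).digest()
        idx = int.from_bytes(h, "big") % total
        if idx not in indices:
            indices.append(idx)
        counter += 1
    return indices
```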
20:04:34 | larslarsen: | gmaxwell: Did you see my protect-nothing chain as heartbeat for timestamp idea above? |
20:05:19 | larslarsen: | time traveler gets the same diff as everyone else, and we recalculate on fixed time scale if nodes dont change in number radically |
20:40:14 | jgarzik: | phantomcircuit, probably ;p |
20:54:37 | larslarsen: | Is there a reason why in H(header||nonce||H(utxo_lookup(H(header||nonce)))) the utxo lookup contains a hash of the header and nonce? Is that to make it possible to validate without a full UTXO set?
20:54:49 | larslarsen: | why not just the nonce? |
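The quoted expression, written out as a sketch; utxo_lookup is a stand-in for whatever rule the scheme uses to map a hash to a UTXO.

```python
import hashlib

# Feeding H(header||nonce) into the lookup ties the selected UTXO to this
# particular work attempt, which is what forces the miner to hold the UTXO set.
def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def utxo_pow(header: bytes, nonce: int, utxo_lookup) -> bytes:
    attempt = header + nonce.to_bytes(8, "big")
    selected_utxo = utxo_lookup(H(attempt))     # caller-supplied: returns serialized UTXO bytes
    return H(attempt + H(selected_utxo))
```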
20:58:39 | phantomcircuit: | jgarzik, lol |
21:02:40 | jgarzik: | phantomcircuit, CTO just created a PGP-signed message for me, if anybody's mind needs to be put at ease. I won't bother taking the time to distribute it unless more people carp ;p |
21:02:54 | jgarzik: | phantomcircuit, so s/probably/yes/
21:18:20 | lnovy: | lnovy is now known as MagicalMole |
21:30:51 | zooko`: | zooko` is now known as zooko |
21:41:42 | MagicalMole: | MagicalMole is now known as lnovy |
21:50:19 | maaku: | jgarzik: I carp ;P |
21:53:13 | jgarzik: | hehe, fine then :) |
21:53:18 | jgarzik: | http://gtf.org/garzik/bitcoin/bitpay-ssl-msg.txt |
21:54:11 | jgarzik: | It is likely signed by pub 2048R/04608C8D 2012-06-07 Stephen Pair |
21:54:13 | jgarzik: | but we'll see! |
22:05:33 | hearn: | jgarzik: you guys are finally killing the dash once and for all, huh |
22:06:09 | jgarzik: | hearn, perhaps. I'm less on the backend side and more on the open source bitcoind/bitcore side. |
22:06:23 | jgarzik: | hearn, speaking of, payment channel client/server coming along nicely: https://github.com/jgarzik/mcp |
22:06:53 | jgarzik: | hearn, Don't sweat JSON-RPC, I'm leaning towards using your .proto |
22:07:05 | jgarzik: | with minor changes :) |
22:07:10 | phantomcircuit: | jgarzik, just an fyi there is something weird going on with the bitpay api
22:07:25 | hearn: | ok. will be interesting to test interop when it's done |
22:07:26 | phantomcircuit: | i went back and looked for similar exceptions (the duplicate id thing) and found none |
22:07:47 | phantomcircuit: | i think it's a performance issue which is triggering weird behaviour on our side |
22:07:49 | phantomcircuit: | but none the less it's an issue |
22:07:55 | jgarzik: | hearn, Are you aware of any high level papers, describing protobufs + state machine design pattern? |
22:08:08 | jgarzik: | hearn, I intuitively grasp it, but explaining it to others is a different matter |
22:08:23 | hearn: | papers? no. but a lot of protocols have state machines. i don't think the serialization format has any real relevance to that |
22:08:51 | hearn: | the protobuf based protocols i've designed are not as rigorously consistent as they could be. i think in future i'll use p2proto |
22:08:57 | hearn: | it's a strict request/response model |
22:09:39 | hearn: | but the micropayment channel protocol should be good enough. it's at least been road tested |
22:10:58 | jgarzik: | hearn, High level. Not serialization format, but design pattern. Protobufs presents a domain specific language as a high level, language independent data structure definition tool. This, in turn, creates micro-efficiencies which strongly encourage the programmer to build a more clean, "pure" implementation that can be largely (or entirely) described by those messages, and the state transitions and data transformations leading from that. |
22:11:39 | hearn: | i'm not sure, but it feels like there's probably tools out there to generate protocols automatically from state machine descriptions |
22:11:47 | hearn: | if you find one let me know :) |
22:11:49 | jgarzik: | hearn, Sure that's stating the obvious to a Googler -- but in practice, where you must hand-code marshalling... you also lack other micro-efficiencies that make testing, simulating and verifying protobufs-generated code quicker than manual coding.
22:12:08 | jgarzik: | hearn, XDR, decades ago, had some essence of this, but it was very primitive. |
22:12:53 | hearn: | internally google uses a custom RPC stack built on top of protobufs |
22:12:57 | jgarzik: | hearn, if you have foo.proto, it is easy to build fooClient and fooServer, and test their interactions directly, without having to exercise middleware and networking code in the test suite. |
22:13:04 | hearn: | but it doesn't have any notion of a state machine. partially because your remote peer can vanish at any moment |
22:13:10 | jgarzik: | an "internal message bus" can replace TCP/IP sockets etc. |
22:13:21 | jgarzik: | which is handy for testing, not just transport independence |
22:13:49 | hearn: | yes, p2proto is a java library that provides some of this. you define the messages as protobufs still, but then it handles the actual message passing and routing for you. gives a lightweight rpc-like model |
22:15:40 | jgarzik: | cool |
22:15:48 | jgarzik: | I'm sort of reinventing that in JS and C++ |
22:16:21 | hearn: | yeah. i wish google would open source stubby |
22:16:28 | jgarzik: | Making easy-to-use protobuf-message-client and protobuf-message-server classes, because I have immediate reuse needs in that area
22:16:59 | hearn: | there are quite a lot of libraries to do this already. perhaps you can find one that already exists |
22:17:09 | hearn: | https://code.google.com/p/protobuf/wiki/ThirdPartyAddOns |
22:17:18 | hearn: | javascript is harder to find though |
22:18:00 | jgarzik: | Thanks. Knew about most of these... http://code.google.com/p/server1/ is new and might be interesting. |
22:18:20 | jgarzik: | The C++ ones tend to love boost a little too much for my tastes. |
22:18:25 | justanotheruser: | Is there any way to make a transaction payable to someone who iterated through and hashed X values? One thing I can think of is only paying them if they find my number which I tell them is between 1 and 10000000000 but that requires trust in me that their hash is going
22:18:34 | justanotheruser: | to evaluate to the winning hash |
22:18:56 | zooko: | jgarzik: also possible of interest: http://kentonv.github.io/capnproto/otherlang.html |
22:19:16 | gmaxwell: | justanotheruser: if you don't mind doing the computation yourself first, you can just ask them to find the preimage of a hash. |
22:19:21 | zooko: | If something other than protobuf is an option. |
22:19:26 | jgarzik: | zooko, sure |
22:19:54 | jgarzik: | zooko, I'm more a fan of the design pattern, using a domain-specific language to make designing network protocols easier |
22:19:55 | justanotheruser: | gmaxwell: I do mind it. I want to have paid timelock encryption. |
22:20:37 | hearn: | jgarzik: ever check out JetBrains MPS? |
22:21:08 | justanotheruser: | So it would be a ton of computation |
22:21:16 | zooko: | jgarzik: yeah, I like that. |
22:21:39 | gmaxwell: | justanotheruser: just take whatever time lock encryption scheme you want to use, and first encrypt the hash preimage with it. Then do the aforementioned hashlocked transaction, and accompany it with a zero knowledge proof that the encrypted value is the preimage of that hash encrypted under your timelock scheme.
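A sketch of the hash-locked piece of that construction; the zero-knowledge proof tying the preimage to the timelock ciphertext is out of scope here, and a real script would normally also require a signature.

```python
import hashlib

# The output can be claimed by whoever reveals a preimage x with
# SHA256(x) == target_hash.
HASH_LOCK_SCRIPT = ["OP_SHA256", "<target_hash>", "OP_EQUAL"]   # schematic only

def preimage_unlocks(preimage: bytes, target_hash: bytes) -> bool:
    return hashlib.sha256(preimage).digest() == target_hash
```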
22:21:47 | hearn: | jgarzik: it's a framework for making DSLs. would be ideal for that kind of thing. though i think an RPC stack is really what you need there |
22:22:16 | justanotheruser: | gmaxwell: heh, I need to learn more about zero knowledge proofs |
22:22:19 | hearn: | jgarzik: the bitcoinj micropayment channel code is not primarily concerned with network or rpc level stuff. it's big and hairy because the steps themselves are complicated and you need a ton of error/input checking |
22:22:52 | jgarzik: | hearn, indeed |
22:23:22 | justanotheruser: | gmaxwell: Didn't you make a zero knowledge proof like that? Or was it something else. I know it was like 80mb though |
22:23:30 | jgarzik: | hearn, I'm coding payment channels and also simultaneously thinking ahead -- we will have a lot of these "little" agreement protocols to build in the future, in bitcoin-land |
22:23:44 | jgarzik: | hearn, it seems worthwhile to consider how wallets will plug into such micro-protocols |
22:24:00 | hearn: | yeah. agreed. that's one reason i was experimenting with MPS. i could never quite convince myself that a "real" DSL was worth the cost though, even with amazing tools to build them |
22:24:59 | hearn: | the channels code in bitcoinj plugs into the wallet by providing a "wallet extension" object, which is allowed to store arbitrary blobs into a serialized wallet file. so it stores channel data there. also registers event handlers and uses the API to broadcast transactions, etc. |
22:25:15 | hearn: | the separation works OK. it's not perfectly clean but pretty good |
22:25:34 | zooko: | jgarzik: in that case, CapnProto might be a good tool. |
22:25:55 | jgarzik: | so noted. Time for beer, pizza, politics, and maybe some discussion of D&D. |
22:26:03 | zooko: | Heh heh. |
22:26:04 | zooko: | Sounds good |
22:26:15 | hearn: | enjoy! |
22:27:37 | maaku: | jgarzik: more D&D, less politics! have fun |
22:29:20 | mike4: | mike4 is now known as c--O-O |
23:35:14 | rdymac_: | rdymac_ is now known as rdymac |