00:24:06Ademan:This was linked in that email thread on cypherpunks someone linked in here a couple of days ago: https://web.archive.org/web/20000325042121/http://www.ait2000.com/egold.htm I thought "They did not realize that merchants and consumers are extremely disinclined to tolerate the slightest inconvenience simply to try something new. ECash, for instance, required the download of client software" was pretty relevant.
14:13:10stonecoldpat:anyone here found any new bitcoin papers @ conferences ? latest i can find is the multiparty one
17:08:11maaku:amiller: so because the high-value-hash highway doesn't include the commitments within the high-value block itself, you can't guarantee that the elided blocks actually represent that stated work
17:10:05maaku:whereas by committing to some number of back-links in advance, and then only using those links which are less work back than the apparent work of the block, you can actually guarantee that the compact spv proof couldn't have been made with less work than it purports
17:14:59amiller:i don't agree, the hvh blocks do include commitments to all of the previous blocks
17:16:29amiller:let me see if i have pseudocode, i think you're misinterpreting what the proposed scheme was, but i'm also not sure yet what you think it is.
17:17:42maaku:amiller: I can select a high-value-hash block in the past, build one block on it at the current difficulty, and in that child block link back much further in the history, no?
17:19:12amiller:uh, yes. if you build on the previous highest value block, then your "back" link would be to that highest-value-hash, and your "up" link would be straight to genesis.
17:20:42maaku:well it *should* be, but in actuality I could choose something else
17:21:18maaku:if my intent is to defraud someone
17:21:28amiller:you don't get to choose
17:21:37amiller:the rule is your back link always points to the previous block you are building on
17:22:34amiller:the "up" link always points to the most recent block that is *one larger* than the previous block
17:23:09maaku:amiller: I understand that it would make my block invalid
17:23:53maaku:my question is: can I create an *invalid* block which is used in the construction of a fraudulent SPV proof, which nevertheless purports to have more work than is needed to create it
17:25:57amiller:right, and i dont see how your counter example is working towards that?
17:27:16amiller:essentially my SPV proof is probabilistic and done interactively with a full node that has all this data at hand
17:27:35amiller:the verifier chooses a level of depth basically
17:27:51amiller:and the prover does the following process which selects the "top layers" of the structure
17:28:15maaku:amiller: yes, and we're trying to build a compact spv proof which does not assume access to the full block headers / full node, and which is guaranteed to not be probabilistic
17:28:15amiller:first you follow the "up" links until you arrive at the highest value hash (something that points to the genesis)
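The back/up link rules amiller describes can be sketched in Python. This is my own illustration of the described scheme, not the actual hash-value-highway code; the class and function names are invented, and "up link points to the most recent block that is one larger" is read loosely as "most recent earlier block whose value exceeds the previous block's value":

```python
import hashlib

def value(h: bytes) -> int:
    """amiller's 'value': the number of leading zero bits in a hash."""
    n = int.from_bytes(h, "big")
    return 256 - n.bit_length()

class Block:
    def __init__(self, chain, payload: bytes):
        self.hash = hashlib.sha256(payload).digest()
        # "back" link: always points to the previous block we build on
        self.back = len(chain) - 1 if chain else None
        # "up" link: the most recent earlier block whose value exceeds
        # the previous block's value (hedged reading of "one larger")
        self.up = None
        if chain:
            prev_val = value(chain[-1].hash)
            for i in range(len(chain) - 1, -1, -1):
                if value(chain[i].hash) > prev_val:
                    self.up = i
                    break

def top_layer(chain):
    """Follow 'up' links from the tip, as in 'first you follow the up
    links until you arrive at the highest value hash'."""
    path, i = [], len(chain) - 1
    while i is not None:
        path.append(i)
        i = chain[i].up
    return path

chain = []
for n in range(32):
    chain.append(Block(chain, b"block-%d" % n))
```

Since every `up` link points strictly earlier in the chain, `top_layer` always terminates, and the returned indices are strictly decreasing.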
17:28:42amiller:uh well good luck with deterministic....
17:30:41maaku:amiller: we've already sorted it out (it's in the email)
17:31:36maaku:by committing to the back links ahead of time, you make sure that the cost of recreating a compact spv proof is >= the cost of creating the chain in the first place
17:32:28amiller:its still not the slightest bit clear to me what the difference is or how this is better than mine
17:32:43amiller:the phrase "committing to the back links ahead of time" absolutely describes mine as well
17:33:26amiller:i'm not saying this because i want you to prefer mine, i'm just trying to figure out how to describe the whole thing more clearly
17:34:00gmaxwell:amiller: So assume diff=1. Say a diff 100 block shows up. Then sometime after it I make a diff 1 block that has a backlink 50 blocks back to the diff 100 block.
17:34:31gmaxwell:Except, I'm lying, there has only been three blocks since that diff 100 block, and they're not even included in my commitment.
17:34:36amiller:gmaxwell, for the sake of clarity could we converge on a couple notation things first
17:34:42amiller:i distinguish between "value" and "difficulty"
17:34:55amiller:"value" is the actual number of zeros, difficulty is the target
17:35:24gmaxwell:lets assume difficulty=1 and lets talk about 1/value which is normalized to difficulty.
17:35:37amiller:also each block in hvh has two links, a "back" and an "up", i think what is happening here is there are a variety of "back links" and they're chosen by the miner
17:35:41gmaxwell:rvalue=1 is the worst acceptable block for diff=1
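The value/difficulty vocabulary being fixed here can be pinned down with a toy snippet (my own illustration; the normalization constant is invented, not Bitcoin's actual max target):

```python
import hashlib

MAX_TARGET = 1 << 256  # toy normalization: difficulty 1 accepts any hash

def value(h: bytes) -> int:
    """Number of leading zero bits -- amiller's 'value'."""
    n = int.from_bytes(h, "big")
    return 256 - n.bit_length()

def meets(h: bytes, difficulty: int) -> bool:
    """A hash meets difficulty d if, as a number, it is below MAX_TARGET/d.
    A hash with v leading zero bits thus meets difficulty 2**v but not
    2**(v+1): each extra zero bit doubles the normalized work."""
    return int.from_bytes(h, "big") * difficulty < MAX_TARGET
```

With this normalization, "rvalue=1 is the worst acceptable block for diff=1" corresponds to `meets(h, 1)` holding for every hash.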
17:37:03gmaxwell:what maaku is talking about allows you to take a collection of headers from someone, and then inspect them and conclude that the total work was the same as the total work you would have gotten by linear walking prev... and no false set of headers could be produced which would trick a verifier without doing just as much expected work.
17:38:40amiller:ok let me think about that for a second, but first expected work is a pretty useless statistic if the variance is high, a major part of my design was to also get a very low variance
17:39:36amiller:in other words a really really low probability that if the attacker claims W, and actually did W*(1-d), then the probability of success is negligible in d
17:41:44gmaxwell:you get low variance for this too eventually. In what you were talking about I think your variance is only low in the far past.. But if there were a recent very low-value block, it would totally wipe out the chain, which is a bad property for convergence and leads to withholding attacks.
17:42:30amiller:ok if you think that's the case let me try to understand what your counter example you're saying is because i dont get it yet
17:42:33gmaxwell:e.g. there is a block right now that has a value lower than the total work in the chain (by several bits) if it were orphaned and then later announced and we used those numbers for chain selection it would totally wipe out the chain between its creation and now.
17:44:36amiller:you're saying there's a way with my scheme that an orphaned block with a lot of work can be included in the sample, even though it isn't really part of the chain
17:46:08gmaxwell:Sure, I mine a subsequent block and hand it to you (and its parent), now you look at it and determine the expected work in the data I gave you, vs the real chain.
17:51:22amiller:hrm, i'm not sure what to say about your example
17:51:29gmaxwell:lets try another approach.
17:51:30amiller:the "expected work *in* the chain" is still accurate
17:51:53amiller:it's just that it's not a valid chain because it includes orphans that aren't in sequence order properly
17:52:08amiller:one thing that my structure (probably yours too) facilitates is finding the intersection point between two chains
17:52:50gmaxwell:(as an aside, the work computed from committed threshold hashcash is half that of what you compute from value hashcash.)
17:55:46gmaxwell:Lets consider a game. I give you two collections of headers and commitments under them, under some rule system: A and B. I win the game if I can convince you A has more expected work than B, when if you follow both of them via only the one-at-a-time prev links B has more work than A, and while I have done less work than the difference myself.
17:56:51amiller:well hold on, even there, i want to clarify what you mean by "has more expected work"
17:57:03amiller:are they both valid chains or they can be arbitrarily corrupt?
17:57:19gmaxwell:They can be arbitrarily corrupt.
17:57:37gmaxwell:or better, say just B can be arbitrarily corrupt.
17:57:47gmaxwell:since if both are dishonest I don't really care.
17:57:57amiller:say A is possibly corrupt because you're convincing me about A
17:58:00gmaxwell:er, sorry.
17:58:11gmaxwell:confusing myself there. I'm glad you're following.
17:58:19amiller:so what i'm trying to clarify is how to interpret "shared" work
17:58:29amiller:because some of the "work contained in A" comes from me, some of it is shared with B
17:59:15amiller:i think it is essential to determine if they have a common subset
17:59:24amiller:(possibly the genesis)
17:59:36amiller:and if so, to then have a way of comparing the work that's in one and not in the other, from that point forward
17:59:46gmaxwell:sure, you can just assume the last header in both of these proofs is the genesis, to prove it connects.
17:59:58amiller:no but what i mean is to find the largest common subset
18:00:49amiller:largest common prefix
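Finding the largest common prefix amiller wants can be sketched directly (illustrative only; headers are stand-in values here):

```python
def fork_point(a, b):
    """Index of the last position where the two header lists agree
    (the largest common prefix), or None if they share nothing.
    In practice a shared genesis guarantees at least index 0."""
    last = None
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            break
        last = i
    return last
```

Everything at an index at or before `fork_point(a, b)` is shared work; everything after it must be counted separately per chain, which is exactly the double-counting concern discussed below the fork point.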
18:05:03ielo:Persopolis, have you ever been to persepolis
18:10:29Persopolis:I have :)
18:11:14tacotime:An Iranian contacted me a while ago asking for some of the source code for my alt. I suppose they must have some people there working on this stuff too.
18:12:36Persopolis:wouldn't be surprised at all - tech-savvy community that is constantly finding ways to circumvent the gov blocking internet services
18:12:59amiller:gmaxwell, so the key thing is that every sample used in comparing the work will come *after* the common sequence point
18:13:11amiller:this is checked by establishing a path from each of those samples to that fork point
18:13:49amiller:likewise all the samples used to measure that common prefix before the fork point is checked with paths from the fork point
18:14:02amiller:so there's no chance of any work being double counted before and after the fork point.
18:14:50amiller:it still remains to argue that you can't double count the same work in A and in B *after* the fork point
18:14:53gmaxwell:great, and so lets say that A has an orphan block with very low value from the real network as its first non-common point, and then some trivial amount of additional blocks past that fork which I created.
18:15:24amiller:ok so this is work that's *not* present on B
18:15:31amiller:but IS included in A?
18:15:36amiller:except there's a break in the chain
18:16:27amiller:okay so i don't see how you're going to deterministically avoid any breaks in the chain without checking all the links, that's outside what i tried to solve with this
18:16:55amiller:for the same amount of work on your part, you could legitimately build on that orphan and just include an invalid transaction
18:17:00gmaxwell:okay the SPV game I described is the problem we need to solve to produce compact proofs that one commitment set is superior to another.
18:17:38amiller:the expected work that is not in B but that was put into A is correctly estimated, the only problem is there's one invalid edge.
18:19:29gmaxwell:(and not just 'superior' but superior by the block-at-a-time definition, which is what we actually need (changing the network's definition of best has other problems, e.g. withholding attacks))
18:21:27amiller:here's something that i don't think i mentioned in my post or in here but i have taken for granted
18:21:35amiller:which is that in each block you would commit to the total amount of work represented by that block
18:21:57amiller:(value doesn't matter, only difficulty, for this)
18:22:09gmaxwell:sure. Like bitcoin does today.
18:22:16gmaxwell:or do you mean total past work?
18:22:19amiller:total past work
18:22:22gmaxwell:in that case you could lie.
18:22:43amiller:yes, but, the point is you are still going to check using this procedure
18:22:49amiller:one thing it helps you with is partitioning because
18:22:58amiller:the previous highest-hash-value roughly divides the total work in half
18:23:11amiller:i don't think this distinction matters really
18:23:21amiller:i'm still trying to figure out what you are saying it is you can do with that orphan block
18:23:42gmaxwell:well the number you get from your procedure is not the same as the total (linear scan) work, e.g. not precise.
18:23:42amiller:if there's an orphan block and it's honest, then it didn't lie
18:23:49amiller:if it does lie, then it's part of you the attacker and it's fair game to include it
18:24:02gmaxwell:the attacker doesn't have to lie about everything.
18:24:25gmaxwell:it seems to me that you're redefining best work to be best under these proofs rather than best linear work, and that has some horrible side effects.
18:24:37gmaxwell:e.g. would you really consider the one with the orphan better (assuming its otherwise valid)?
18:24:51gmaxwell:if so I could withhold the orphan if I get insanely lucky and reorg away all the future work.
18:24:58amiller:i am describing it as a hypothesis test
18:25:03amiller:each block commits to a total amount of past work
18:25:07gmaxwell:e.g. right now the all-time best block could reorg off several months of work.
18:25:23amiller:your goal is to do a test to prove that it can't have been conducted except with really low probability and some really close bound to a tie
18:27:19amiller:no there's no way the all time best block would help you very much in trying to make an attack that reorgs several months of work
18:28:04amiller:unless the block was made by an attacker and lied about the work behind it (in which case it's dishonest, and the expected attacker's work is exactly what you wanted it to be), AND the person who's being fooled is using an invalid choice of parameters, i.e. only checking 1 level instead of k levels
18:28:15gmaxwell:well I'm saying that if you e.g. summed 1/values that would have that effect, I'm making sure you're not doing that.
18:28:26amiller:i don't just sum values for sure
18:28:33amiller:it's a recursive hypothesis test
18:28:56amiller:it takes E(W) steps to find a 2^-W hash
18:29:06amiller:it also takes E(W) steps to find two 2^{W-1} hashes
18:29:39amiller:but that doesn't mean it takes E(2W) steps to find both, putting in E(W) work you'd expect to find both of those
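amiller's point that one 2^-W hash and two 2^-(W-1) hashes cost the same expected work (though not the same variance) can be checked with a small simulation; random bit draws stand in for hashing, and all names are my own:

```python
import random

def trials_until(zero_bits: int, rng: random.Random) -> int:
    """Trials until a uniform draw has `zero_bits` leading zero bits,
    i.e. success probability 2**-zero_bits per trial."""
    n = 0
    while True:
        n += 1
        if rng.getrandbits(zero_bits) == 0:
            return n

rng = random.Random(1)
RUNS, W = 3000, 8
# expected trials to find one 2**-W "hash": 2**W = 256
one_rare = sum(trials_until(W, rng) for _ in range(RUNS)) / RUNS
# expected trials to find two 2**-(W-1) "hashes": 2 * 2**(W-1) = 256 too
two_easier = sum(trials_until(W - 1, rng) + trials_until(W - 1, rng)
                 for _ in range(RUNS)) / RUNS
```

Both averages come out near 256 trials; the difference, as the conversation notes, is in the variance, which is what the probabilistic proof has to control.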
18:39:39amiller:also for any given level, it's easy to store the proofs, it doesn't actually require storing all the data or interacting with a verifier
18:43:05gmaxwell:sure yea, if I didn't make that clear in the game above I intended it to be non-interactive.
18:45:02larslarsen:I just had a very not well thought out idea for how to protect alts from sudden drop in hashpower
18:45:40larslarsen:The only solutions are A: hardcode difficulty at N block and hardfork, B: have big holders pay themselves with huge transaction fees to lure mining power
18:46:58gmaxwell:larslarsen: there is also the nlock thing to keep fees flowing forward.
18:47:07larslarsen:what if you have a backup chain, of lower security, using a different protocol, that secures the same blocks
18:47:19gmaxwell:e.g. all users should be nlocktiming their txn for where they'd expect them to get mined so there is always a queue of fees available.
18:48:15larslarsen:I don't think paying for hashing power you don't need is the answer, its a tiny alt, its difficulty SHOULD be low... if we had some time-based overlapping windows scheme we could fix it, but we don't
18:49:22larslarsen:so, in my scheme, the chain is forked after the genesis block, and POW on one chain is changed to something else, say UTXO lookup hash. Both chains continue in parallel, storing only the block header, each one with the same merkle in it
18:50:37larslarsen:If the difficulty on one gets cranked, the other chain keeps solving blocks until enough blocks have been mined to recalculate
18:50:59larslarsen:its less secure, perhaps... but it doesn't require any human intervention
18:55:56larslarsen:gmaxwell: a queue of fees wont cut it, you're going to have to pay big bucks to get a major pool to jump on something that has zero value. But the nodes (who have a major interest in blocks completing, even at a lower difficulty) could be miners of the second chain, just to keep things moving.
18:56:24gmaxwell:well I don't worry too much about things with zero value, thats what merged mining is for.
18:56:32gmaxwell:If you have zero value you have other issues.
18:56:48larslarsen:what I mean is, the market cap of the coin may not be enough to lure a pool in at the current difficulty
18:57:29larslarsen:All the coins paid as fees might not be enough
18:58:18gmaxwell:the problem you run into is that in your effort to make things not suck when they suck you create potential vulnerabilities that exist even when things don't suck.
18:58:18maaku:larslarsen: demurrage
18:58:22larslarsen:I don't know how big of a problem it is, but it also kills the problem of UTXO mining being easy to buy. I could kill it with enough money and EC2 instances
18:58:40larslarsen:but put a regular miner on top of it, and its got best of both
18:59:28maaku:larslarsen: if there is no value to running a coin, yes it will be insecure. but who cares? it's not valued
18:59:29larslarsen:oh yeah, I'll make a signed int instead of an unsigned int.... oh wait
19:00:22larslarsen:maaku: when I said no value, I meant "worth a tiny fraction of coins with its current difficulty level"
19:00:49maaku:larslarsen: what's the connection between difficulty and value
19:02:36larslarsen:I was referring to gmaxwell's mention of vulnerabilities every time someone tries to "fix" something
19:02:54larslarsen:I dont see how a second blockhash could make anything insecure unless I make an implementation fuckup
19:03:17gmaxwell:it's not having it that makes things insecure, its doing anything with it. :)
19:03:38larslarsen:Its basically open warfare out there... people will pay a pool to kill you in the pump, and dump you in the dump.
19:03:49larslarsen:Its ridiculous
19:04:50gmaxwell:It's not clear that bitcoin's kind of consensus can even work at all for worthless things.
19:05:02gmaxwell:(a fact that I've been pointing out for years)
19:05:58larslarsen:gmaxwell: granted, but it would be assumed less secure, and if they dont agree it does nothing. If they agree, who cares? If there is one block missing, well who cares? If the weaker chain is cheating it'll just make "nothing happen" as I said, if they dont equal
19:07:06gmaxwell:larslarsen: because I can isolate you and feed you this 'strictly inferior' thing.. Or not even isolate you, let you use the weaker thing, and then after you've had it in your consensus for a while, mine a bit at full diff and reorg it all out.
19:08:12larslarsen:True, its a planned blockrace... you have to decide which to trust
19:08:42larslarsen:I would need to flip to it only when I know that it is completing blocks at a rate faster than the "REal" POW chain
19:09:08larslarsen:which means actually using one chain :(
19:09:11larslarsen:ugh, thanks gmaxwell
19:09:50gmaxwell:no problem, crushing dreams is just part of a wizard's work.
19:09:56larslarsen:Thats why I came here
19:10:04larslarsen:I knew it was not well thought out
19:11:19larslarsen:The time traveler attack on sliding-time-window recalculation schemes would still work if instead of recalculating, we simply switched to the alternate POW for the next block, and went to that POW's last difficulty?
19:12:13larslarsen:or is it one of a half dozen new vulnerabilities I just introduced by trying to be clever
19:12:15gmaxwell:the attacker still has the option.
19:12:26gmaxwell:so he'll do whatever works best for him.
19:12:30larslarsen:they would simply get the chance to use a different POW sooner right?
19:12:57larslarsen:you could alternately alternate between POW on every block
19:13:21larslarsen:even blocks always scrypt+UTXO lookup odd blocks always SHA
19:13:41larslarsen:if someone solves two SHA's in a row, sorry, you're out
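larslarsen's alternating-PoW rule ("if someone solves two SHAs in a row, sorry, you're out") amounts to a parity check on block heights. A toy sketch, with all names invented and the PoW kinds reduced to labels:

```python
def expected_kind(height: int) -> str:
    """Even heights must use the scrypt+UTXO-lookup PoW, odd heights SHA."""
    return "scrypt+utxo" if height % 2 == 0 else "sha"

def chain_valid(kinds) -> bool:
    """Reject a chain if any block's PoW kind doesn't match its height's
    parity -- this is what rules out two SHA blocks in a row."""
    return all(k == expected_kind(h) for h, k in enumerate(kinds))
```

As gmaxwell points out right after, this doesn't by itself stop a well-resourced attacker, who can simply mine both PoWs.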
19:15:05gmaxwell:I'm not sure what you're actually trying to accomplish with that.
19:15:22larslarsen:assume difficulty on SHA goes to a billion zillion
19:15:40gmaxwell:Generally hybrid pow just means that there is higher R&D/fixed cost in making an efficient miner, which arguably creates a centralization bias.
19:16:18larslarsen:gmaxwell: I basically want to keep hashing power relatively level, in an environment where it is used as a weapon
19:17:26larslarsen:gmaxwell: I am not trying to fix POW, I am trying to find out a way to keep the difficulty appropriate so blocks keep flowing
19:17:35gmaxwell:adding more POW won't achieve that... as the attackers would just mine all the pows, assuming high power miners exist.
19:18:04larslarsen:gmaxwell: they would have to have full UTXO and that keeps pools out
19:18:08gmaxwell:to start, by definition achieving that end creates isolation vulnerabilities, regardless of how you achieve it.
19:18:11larslarsen:they just get handed shares
19:18:29larslarsen:not the CONCEPT of pools, but the actual pools we have now
19:18:41gmaxwell:e.g. I am a network attacker, I isolate some collection of nodes, and I simulate a network that has lost all its hashpower. Now if blocks keep flowing you'll accept transactions which are reversed when I end the isolation.
19:19:05larslarsen:I see, so if part of the network gets split, you can't just orphan blocks, you have money gone
19:20:33larslarsen:I think if it comes down to "well if they can crush you with hashpower, they can also replicate your entire node base to attack you" means I've done a pretty good job.
19:21:14larslarsen:but sadly, I have not
19:21:39larslarsen:I'll get back to well thought out ideas now.... thanks
19:29:53larslarsen:I could not listen to it at all, except as a timing mechanism. When the recalculation window is hit, if I see that I processed the expected number of do-nothing chain blocks but NO blockchain blocks in that time period, I can recalculate based on a block, which is atomic
19:29:56larslarsen:its just a heartbeat
19:30:23larslarsen:since its an empty chain that doesn't need to propagate transactions, it can beat very fast
19:30:36larslarsen:and its based on validation as POW, so its always happening anyway
19:32:02larslarsen:and whats best, its a rolling mini chain, because who cares what its tied to
19:43:41phantomcircuit:jgarzik, is the bitpay ssl cert update email legitimate? (i cant see why not but it's worth asking)
19:57:07larslarsen:Any time traveler is mining on the same block number as everyone else, and his difficulty is set just like everyone else
20:01:22shesek:I'm wondering... if I have a PoW that is composed from multiple Independent PoW, is it possible to only check some percentage of the Independent PoW and make it so that its still makes the most economical sense to be honest?
20:02:27gmaxwell:sure, more or less. E.g. by using the hash of a commitment to all your pows to select which ones to reveal, but cumulative POW schemes are not generally progress free.
20:03:34gmaxwell:e.g. if instead of requiring a diff=3bn block we required a proof over 3 billion diff-1 shares, the fastest miner would ~always win.
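gmaxwell's "use the hash of a commitment to all your pows to select which ones to reveal" is a Fiat-Shamir-style trick: derive the challenge from the commitment itself so the prover can't cherry-pick. A rough sketch, with parameter names invented for illustration:

```python
import hashlib

def reveal_indices(commitment: bytes, n_shares: int, n_reveal: int):
    """Derive the challenge indices from the commitment (e.g. a Merkle
    root over all shares), so the prover cannot choose which shares to
    reveal after the fact."""
    picked, counter = [], 0
    while len(picked) < n_reveal:
        h = hashlib.sha256(commitment + counter.to_bytes(4, "big")).digest()
        idx = int.from_bytes(h, "big") % n_shares
        if idx not in picked:          # skip duplicate draws
            picked.append(idx)
        counter += 1
    return picked
```

Because the indices are a deterministic function of the commitment, a verifier recomputes the same set and only needs the revealed shares plus their Merkle paths, not all of them.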
20:04:34larslarsen:gmaxwell: Did you see my protect-nothing chain as heartbeat for timestamp idea above?
20:05:19larslarsen:time traveler gets the same diff as everyone else, and we recalculate on fixed time scale if nodes dont change in number radically
20:40:14jgarzik:phantomcircuit, probably ;p
20:54:37larslarsen:Is there a reason why in H(header||nonce||H(utxo_lookup(H(header||nonce)))) the UTXO lookup contains a hash of the header and nonce? Is that to make it possible to validate without a full UTXO set?
20:54:49larslarsen:why not just the nonce?
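larslarsen's formula can be written out directly. One plausible reading (my interpretation, with a plain list standing in for the UTXO set and all names invented): deriving the lookup key from H(header||nonce) rather than the nonce alone means each nonce attempt forces an unpredictable UTXO access tied to this specific block.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def pow_hash(header: bytes, nonce: int, utxo_set) -> bytes:
    """H(header||nonce||H(utxo_lookup(H(header||nonce)))): the lookup key
    depends on both header and nonce, so the miner can't precompute or
    reuse lookups across blocks or nonce attempts."""
    n = nonce.to_bytes(8, "big")
    key = H(header + n)
    entry = utxo_set[int.from_bytes(key, "big") % len(utxo_set)]
    return H(header + n + H(entry))
```

A verifier checking a claimed solution only needs the single referenced UTXO entry (plus a proof it is in the set), which may be what the "validate without a full UTXO set" question is getting at.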
20:58:39phantomcircuit:jgarzik, lol
21:02:40jgarzik:phantomcircuit, CTO just created a PGP-signed message for me, if anybody's mind needs to be put at ease. I won't bother taking the time to distribute it unless more people carp ;p
21:02:54jgarzik:phantomcircuit, so s/probaby/yes/
21:50:19maaku:jgarzik: I carp ;P
21:53:13jgarzik:hehe, fine then :)
21:54:11jgarzik:It is likely signed by pub 2048R/04608C8D 2012-06-07 Stephen Pair
21:54:13jgarzik:but we'll see!
22:05:33hearn:jgarzik: you guys are finally killing the dash once and for all, huh
22:06:09jgarzik:hearn, perhaps. I'm less on the backend side and more on the open source bitcoind/bitcore side.
22:06:23jgarzik:hearn, speaking of, payment channel client/server coming along nicely: https://github.com/jgarzik/mcp
22:06:53jgarzik:hearn, Don't sweat JSON-RPC, I'm leaning towards using your .proto
22:07:05jgarzik:with minor changes :)
22:07:10phantomcircuit:jgarzik, just an fyi there is something weird going on with the bitpay api
22:07:25hearn:ok. will be interesting to test interop when it's done
22:07:26phantomcircuit:i went back and looked for similar exceptions (the duplicate id thing) and found none
22:07:47phantomcircuit:i think it's a performance issue which is triggering weird behaviour on our side
22:07:49phantomcircuit:but nonetheless it's an issue
22:07:55jgarzik:hearn, Are you aware of any high level papers, describing protobufs + state machine design pattern?
22:08:08jgarzik:hearn, I intuitively grasp it, but explaining it to others is a different matter
22:08:23hearn:papers? no. but a lot of protocols have state machines. i don't think the serialization format has any real relevance to that
22:08:51hearn:the protobuf based protocols i've designed are not as rigorously consistent as they could be. i think in future i'll use p2proto
22:08:57hearn:it's a strict request/response model
22:09:39hearn:but the micropayment channel protocol should be good enough. it's at least been road tested
22:10:58jgarzik:hearn, High level. Not serialization format, but design pattern. Protobufs presents a domain specific language as a high level, language independent data structure definition tool. This, in turn, creates micro-efficiencies which strongly encourage the programmer to build a more clean, "pure" implementation that can be largely (or entirely) described by those messages, and the state transitions and data transformations leading from that.
22:11:39hearn:i'm not sure, but it feels like there's probably tools out there to generate protocols automatically from state machine descriptions
22:11:47hearn:if you find one let me know :)
22:11:49jgarzik:hearn, Sure that's stating the obvious to a Googler -- but in practice, where you must hand-code marshalling... you also lack other micro-efficiencies that make testing, simulating and verifying protobufs-generated code quicker than manual coding.
22:12:08jgarzik:hearn, XDR, decades ago, had some essence of this, but it was very primitive.
22:12:53hearn:internally google uses a custom RPC stack built on top of protobufs
22:12:57jgarzik:hearn, if you have foo.proto, it is easy to build fooClient and fooServer, and test their interactions directly, without having to exercise middleware and networking code in the test suite.
22:13:04hearn:but it doesn't have any notion of a state machine. partially because your remote peer can vanish at any moment
22:13:10jgarzik:an "internal message bus" can replace TCP/IP sockets etc.
22:13:21jgarzik:which is handy for testing, not just transport independence
22:13:49hearn:yes, p2proto is a java library that provides some of this. you define the messages as protobufs still, but then it handles the actual message passing and routing for you. gives a lightweight rpc-like model
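jgarzik's point that an "internal message bus" can replace TCP/IP sockets for testing can be sketched without any framework. This is my own minimal illustration, not p2proto or jgarzik's mcp code; JSON stands in for protobuf so the sketch stays dependency-free, and all names are invented:

```python
import json
import queue

class Bus:
    """In-memory message bus standing in for the network transport."""
    def __init__(self):
        self.to_server = queue.Queue()
        self.to_client = queue.Queue()

class EchoServer:
    """A trivial 'fooServer': answers each request with its own params."""
    def serve_one(self, bus):
        req = json.loads(bus.to_server.get())
        bus.to_client.put(json.dumps({"id": req["id"], "result": req["params"]}))

def call(bus, server, method, params, msg_id=1):
    """A trivial 'fooClient': send a request, drive the server, read reply."""
    bus.to_server.put(json.dumps({"id": msg_id, "method": method, "params": params}))
    server.serve_one(bus)
    return json.loads(bus.to_client.get())
```

The test suite exercises the full request/response state machine while never touching middleware or networking code; swapping `Bus` for a real socket transport changes nothing in the client or server logic.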
22:15:48jgarzik:I'm sort of reinventing that in JS and C++
22:16:21hearn:yeah. i wish google would open source stubby
22:16:28jgarzik:Making easy-to-use protobuf-message-client and protobuf-message-server classes, because I have immediately reuse needs in that area
22:16:59hearn:there are quite a lot of libraries to do this already. perhaps you can find one that already exists
22:17:18hearn:javascript is harder to find though
22:18:00jgarzik:Thanks. Knew about most of these... http://code.google.com/p/server1/ is new and might be interesting.
22:18:20jgarzik:The C++ ones tend to love boost a little too much for my tastes.
22:18:25justanotheruser:Is there any way to make a transaction payable to someone who iterated through and hashed X values? One thing I can think of is only paying them if they find my number which I tell them is between 1 and 10000000000, but that requires trust in me that their hash is going to evaluate to the winning hash
22:18:56zooko:jgarzik: also possible of interest: http://kentonv.github.io/capnproto/otherlang.html
22:19:16gmaxwell:justanotheruser: if you don't mind doing the computation yourself first, you can just ask them to find the preimage of a hash.
22:19:21zooko:If something other than protobuf is an option.
22:19:26jgarzik:zooko, sure
22:19:54jgarzik:zooko, I'm more a fan of the design pattern, using a domain-specific language to make designing network protocols easier
22:19:55justanotheruser:gmaxwell: I do mind it. I want to have paid timelock encryption.
22:20:37hearn:jgarzik: ever check out JetBrains MPS?
22:21:08justanotheruser:So it would be a ton of computation
22:21:16zooko:jgarzik: yeah, I like that.
22:21:39gmaxwell:justanotheruser: just take whatever time lock encryption scheme you want to use, and first encrypt the hash preimage with it. Then do the aforementioned hashlocked transaction, and accompany it with a zero knowledge proof that the encrypted value is the preimage of that hash encrypted under your timelock scheme.
22:21:47hearn:jgarzik: it's a framework for making DSLs. would be ideal for that kind of thing. though i think an RPC stack is really what you need there
22:22:16justanotheruser:gmaxwell: heh, I need to learn more about zero knowledge proofs
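The hashlocked half of gmaxwell's construction is simple to sketch (illustration only, names invented); the hard part, the zero-knowledge proof tying the timelock ciphertext to this hash, is deliberately omitted:

```python
import hashlib

def hashlock(preimage: bytes) -> bytes:
    """The payer publishes this lock; funds are claimable with the preimage."""
    return hashlib.sha256(preimage).digest()

def can_claim(lock: bytes, candidate: bytes) -> bool:
    """In the full scheme the preimage is also timelock-encrypted, with a
    ZK proof that the ciphertext hides exactly this preimage (not shown)."""
    return hashlib.sha256(candidate).digest() == lock
```

The ZK proof is what removes the trust justanotheruser worried about: the solver knows in advance that decrypting the timelock ciphertext will yield the claiming preimage.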
22:22:19hearn:jgarzik: the bitcoinj micropayment channel code is not primarily concerned with network or rpc level stuff. it's big and hairy because the steps themselves are complicated and you need a ton of error/input checking
22:22:52jgarzik:hearn, indeed
22:23:22justanotheruser:gmaxwell: Didn't you make a zero knowledge proof like that? Or was it something else. I know it was like 80mb though
22:23:30jgarzik:hearn, I'm coding payment channels and also simultaneously thinking ahead -- we will have a lot of these "little" agreement protocols to build in the future, in bitcoin-land
22:23:44jgarzik:hearn, it seems worthwhile to consider how wallets will plug into such micro-protocols
22:24:00hearn:yeah. agreed. that's one reason i was experimenting with MPS. i could never quite convince myself that a "real" DSL was worth the cost though, even with amazing tools to build them
22:24:59hearn:the channels code in bitcoinj plugs into the wallet by providing a "wallet extension" object, which is allowed to store arbitrary blobs into a serialized wallet file. so it stores channel data there. also registers event handlers and uses the API to broadcast transactions, etc.
22:25:15hearn:the separation works OK. it's not perfectly clean but pretty good
22:25:34zooko:jgarzik: in that case, CapnProto might be a good tool.
22:25:55jgarzik:so noted. Time for beer, pizza, politics, and maybe some discussion of D&D.
22:26:03zooko:Heh heh.
22:26:04zooko:Sounds good
22:27:37maaku:jgarzik: more D&D, less politics! have fun