--- Log opened Wed Mar 27 00:00:08 2013
16:58 < sipa> converting C++ to C is boooooring
17:00 < jgarzik> hehehe
17:00 < sipa> and changing a += b; into secp256k1_ge_add(&a, &a, &b); does hurt...
17:01 < warren> sipa: resulting code won't be any slower though, right?
17:01 < sipa> no
17:02 < sipa> btw, a friend of mine contributed x86_64 assembly for a few low-level routines: 20% speedup!
17:02 < warren> nice =)
17:20 < petertodd> sipa: Are you planning on eventually getting rid of the libgmp dep in your secp implementation?
17:20 < sipa> perhaps, yes
17:20 < warren> petertodd: I'm hoping he gets rid of openssl
17:21 < sipa> petertodd: that does mean writing our own mini bigint code, though
17:22 < sipa> which is somewhat stupid if very well optimized alternatives exist
17:22 < petertodd> What are the dependencies like for those well optimized alternatives?
17:22 < sipa> gmp :)
17:22 < petertodd> Ha
17:23 < sipa> and gmp doesn't have dependencies of its own
17:23 < petertodd> I'm working on a really preliminary design for a "merkleized forth"; it should have its core written in C, I'm thinking, with no dependencies, for easy auditing/running on microprocessors.
17:26 < sipa> eh, relevance?
17:27 < petertodd> I'll need a fast secp256k1 implementation eventually, and probably a bigint implementation too, ideally ones that don't depend on malloc.
18:12 < warren> gmaxwell: regarding p2pool and your idea of share fork merging. There is a potential flaw in the share fork merging idea that I can't think of a solution to. Say you allow a share to have up to 4 parents. If colluding buddy nodes own one of those parallel shares, what incentive do they have to relay competing parallel shares if any block solution they come up with is valid? They're better off excluding competing parallel shares as much as possible. It would be difficult for the network to detect.
18:15 < warren> Hmm, I suppose it might work if the post-merge shares can be orphaned by another post-merge share that has more parents.
18:16 < warren> But are we then back to the original problem...
18:16 < gmaxwell> warren: as was already said before: the chain with the most difficulty wins.
18:17 < gmaxwell> and yea, you do end up with a circular issue there, I wasn't sure how to solve that.
18:18 < warren> gmaxwell: wouldn't this also exacerbate the frequency of new work? Every time your p2pool node receives a parallel share, you would have to restart mining?
18:19 < warren> If so we didn't really solve any problem here.
18:19 < gmaxwell> No. You use it if you have it.
18:19 < gmaxwell> The other work is late; you usually have it from the prior cycle.
18:20 < warren> I'm not following. Whenever you receive a new latest share, work restarts at that moment, no?
18:21 < gmaxwell> warren: I'm concerned that you're using the word 'restart'
18:22 < gmaxwell> You switch to work based on that, sure.
18:23 < warren> And isn't that switch where we currently have the work return latency issue?
18:23 < warren> Your local node switches work after you receive a new tip share.
18:23 < warren> "Late" shares come in, parallel to your tip share.
18:24 < gmaxwell> Yes, but prior to that happening you've received some straggling shares from other peers.
18:24 < gmaxwell> Late shares came in during the prior interval.
18:25 < warren> I might be missing something crucial in understanding this.
18:26 < gmaxwell> The late merging is not of shares that were competitors for the current head, but of shares which were competitors for the prior one.
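A minimal sketch of the merging rule gmaxwell describes above, written in C purely for illustration (p2pool itself is Python; the Share type, MAX_PARENTS, and the helper name here are all hypothetical, not actual p2pool code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical share record: warren's proposal allows a share to
     * reference up to 4 parents, so competing shares at the prior height
     * can be merged instead of orphaned. */
    enum { MAX_PARENTS = 4 };

    typedef struct Share {
        uint32_t height;                      /* position in the share chain */
        const struct Share *parents[MAX_PARENTS];
        size_t n_parents;
    } Share;

    /* gmaxwell's point: a late share is only merged if it competed for the
     * *prior* head (height == new_head->height - 1); it does not trigger a
     * switch of the work you are currently mining on. */
    bool try_merge_late_share(Share *new_head, const Share *late)
    {
        if (late->height != new_head->height - 1)
            return false;                     /* not a prior-head competitor */
        if (new_head->n_parents >= MAX_PARENTS)
            return false;                     /* all parent slots used */
        new_head->parents[new_head->n_parents++] = late;
        return true;
    }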
18:28 < warren> How does that avoid switching work more often than every 10 seconds?
18:30 < gmaxwell> What would you switch to? Height-100 comes in, you compute work based on H100 that merges the H99 competition. If you get any H99 work after you receive H100, you reject it.
18:31 < warren> So H101 merges the parallel H99's.
18:32 < warren> And p2pool would need to accept any share solutions that come in a short while (maybe 2 seconds) after work switches
18:46 < warren> argh, this isn't as easy as I thought
--- Log closed Thu Mar 28 00:00:09 2013
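To make sipa's 16:58-17:00 remarks concrete: converting C++ to C means replacing overloaded operators with explicit function calls that take the destination as a pointer. A self-contained toy sketch of that pattern, assuming a stand-in group-element type (the real libsecp256k1 struct holds field elements and performs elliptic-curve group addition, not integer addition):

    #include <stdio.h>

    /* Toy stand-in for libsecp256k1's group-element type. */
    typedef struct { long v; } secp256k1_ge;

    /* C has no operator overloading, so C++'s `a += b;` becomes an explicit
     * call with the result written through the first pointer: r = a + b. */
    void secp256k1_ge_add(secp256k1_ge *r, const secp256k1_ge *a,
                          const secp256k1_ge *b)
    {
        r->v = a->v + b->v;   /* placeholder for the actual group addition */
    }

    int main(void)
    {
        secp256k1_ge a = {1}, b = {2};
        secp256k1_ge_add(&a, &a, &b);   /* the C spelling of a += b; */
        printf("%ld\n", a.v);           /* prints 3 */
        return 0;
    }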