The new on-disk cache implementation

Chris Bennett chris at ceegeebee.com
Sat Apr 25 23:11:04 PDT 2015


I'm right in the middle of trying to build git HEAD on *any* Linux distro
and having problems.  I've just submitted a pull request for an E_BUSY error
that comes up on CentOS/Debian.

Now I'm hitting this on CentOS 6/7 and Debian 6/7/8:

aes128-cbc-speed1.cc:58:68: error: no matching function for call to
'callback(CryptoSpeed*, void (CryptoSpeed::*)())'
aes128-cbc-speed1.cc:61:98: error: no matching function for call to
'callback(CryptoSpeed*, void (CryptoSpeed::*)())'
aes128-cbc-speed1.cc:77:68: error: no matching function for call to
'callback(CryptoSpeed*, void (CryptoSpeed::*)(Event))'
aes128-cbc-speed1.cc:89:68: error: no matching function for call to
'callback(CryptoSpeed*, void (CryptoSpeed::*)())'
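
For what it's worth, these call sites appear to pass an object pointer plus
a member-function pointer.  A minimal standalone sketch of that pattern
(this is not WANproxy's actual callback API; CryptoSpeed::tick, execute()
and the class name are just mine for illustration) compiles fine for me:

    #include <iostream>

    // A tiny bound callback: an object pointer plus a zero-argument
    // member-function pointer, invocable later via execute().
    template <typename T>
    class MethodCallback {
            T *obj_;
            void (T::*method_)();
    public:
            MethodCallback(T *obj, void (T::*method)())
            : obj_(obj), method_(method)
            { }

            void execute(void)
            {
                    (obj_->*method_)();
            }
    };

    template <typename T>
    MethodCallback<T> callback(T *obj, void (T::*method)())
    {
            return MethodCallback<T>(obj, method);
    }

    struct CryptoSpeed {
            void tick(void)
            {
                    std::cout << "tick" << std::endl;
            }
    };

    int main(void)
    {
            CryptoSpeed cs;
            MethodCallback<CryptoSpeed> cb = callback(&cs, &CryptoSpeed::tick);
            cb.execute();
            return 0;
    }

So if the callback() overloads in HEAD no longer accept this shape of
member-function pointer, errors like the ones above are what I'd expect.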

Any ideas..?

Chris

On 26 April 2015 at 14:55, Ahmed Al-Ghafri <al-ghafri at hotmail.com> wrote:

> That's great, Juli; let me try your new updated implementation and give
> you feedback. I am wondering whether two WANproxy machines can be placed
> in-line on a WAN link so that they do the optimization in a bridge mode,
> with no need to touch the IP configuration of the existing network. Is
> that achievable? That would give us two great modes: proxy and bridge.
>
> > From: juli at clockworksquid.com
> > Date: Fri, 24 Apr 2015 23:20:07 -0700
> > Subject: Re: The new on-disk cache implementation
> > To: al-ghafri at hotmail.com
> > CC: wanproxy at lists.wanproxy.org
>
> >
> > Ahmed,
> >
> > I went through several incomplete implementations that predate
> > Diego's, and I have plans to extend it beyond his work; I wanted to
> > start from a design that would extend to support the features and
> > functionality I intend to include, and some that are needed today,
> > including the ability to share a single on-disk cache between multiple
> > peers.
> >
> > Upload and download both go into the cache, but they do not share
> > data, at least not yet. So a segment from one peer will not be used
> > to deduplicate data from another peer. Whether this is done in the
> > future is an open question; it raises a lot of issues about
> > configurations with many-to-many relationships. It might be worth
> > having a configuration parameter to share a cache for local and remote
> > segments in one-to-one configurations.
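> >
> > To make the current behaviour concrete, here is a rough sketch of the
> > lookup shape I mean (the names and types are illustrative, not the
> > actual cache code): the peer is part of the key, so a segment cached
> > for one peer is never returned for another.
> >
> >     #include <stdint.h>
> >     #include <map>
> >     #include <string>
> >     #include <utility>
> >
> >     typedef uint64_t SegmentHash;
> >     typedef std::string PeerID;
> >
> >     /* Segments keyed by (peer, hash): no cross-peer deduplication. */
> >     class OnDiskCacheSketch {
> >             std::map<std::pair<PeerID, SegmentHash>, std::string> segments_;
> >     public:
> >             bool lookup(const PeerID& peer, SegmentHash hash, std::string& data) const
> >             {
> >                     std::map<std::pair<PeerID, SegmentHash>, std::string>::const_iterator it =
> >                             segments_.find(std::make_pair(peer, hash));
> >                     if (it == segments_.end())
> >                             return false;
> >                     data = it->second;
> >                     return true;
> >             }
> >
> >             void insert(const PeerID& peer, SegmentHash hash, const std::string& data)
> >             {
> >                     segments_[std::make_pair(peer, hash)] = data;
> >             }
> >     };
> >
> > Sharing would amount to dropping the peer from that key, or adding a
> > configuration knob that collapses it, for one-to-one setups.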
> >
> > Let me know if your issue persists with the latest code.
> >
> > Thanks,
> > Juli.
> >
> > On Fri, Apr 24, 2015 at 10:09 PM, Ahmed Al-Ghafri
> > <al-ghafri at hotmail.com> wrote:
> > > Hello Juli,
> > >
> > > Excellent progress in WANProxy this month. Finally, the on-disk cache
> > > is on its way to being officially supported. I wanted to ask: what is
> > > the difference between your on-disk cache implementation and Diego's?
> > > That is, why did you start from scratch rather than build on what
> > > Diego has done?
> > >
> > > Another thing: in the current implementation, does the on-disk cache
> > > work in both directions, i.e. do both uploads and downloads fill the
> > > cache?
> > >
> > > BTW, last time I ran into a problem showing the error
> > > [/zlib/inflate_pipe] ERR: virtual void InflatePipe::consume
> > > If you can help I would appreciate it; here is the link:
> > > http://lists.wanproxy.org/pipermail/wanproxy-wanproxy.org/2015-January/001555.html
> > >
> > > Regards,
> > > Ahmed