2018-11-04 15:58:15 +01:00
commit f956bcee28
1178 changed files with 584552 additions and 0 deletions

vendor/github.com/anacrolix/torrent/LICENSE generated vendored Normal file

@@ -0,0 +1,373 @@
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.

vendor/github.com/anacrolix/torrent/Peer.go generated vendored Normal file

@@ -0,0 +1,34 @@
package torrent
import (
"net"
"github.com/anacrolix/dht/krpc"
"github.com/anacrolix/torrent/peer_protocol"
)
type Peer struct {
Id [20]byte
IP net.IP
Port int
Source peerSource
// Peer is known to support encryption.
SupportsEncryption bool
peer_protocol.PexPeerFlags
}
func (me *Peer) FromPex(na krpc.NodeAddr, fs peer_protocol.PexPeerFlags) {
me.IP = append([]byte(nil), na.IP...)
me.Port = na.Port
me.Source = peerSourcePEX
// If they prefer encryption, they must support it.
if fs.Get(peer_protocol.PexPrefersEncryption) {
me.SupportsEncryption = true
}
me.PexPeerFlags = fs
}
func (me Peer) addr() ipPort {
return ipPort{me.IP, uint16(me.Port)}
}

vendor/github.com/anacrolix/torrent/Peers.go generated vendored Normal file

@@ -0,0 +1,35 @@
package torrent
import (
"github.com/anacrolix/dht/krpc"
"github.com/anacrolix/torrent/peer_protocol"
"github.com/anacrolix/torrent/tracker"
)
type Peers []Peer
func (me *Peers) AppendFromPex(nas []krpc.NodeAddr, fs []peer_protocol.PexPeerFlags) {
for i, na := range nas {
var p Peer
var f peer_protocol.PexPeerFlags
if i < len(fs) {
f = fs[i]
}
p.FromPex(na, f)
*me = append(*me, p)
}
}
func (ret Peers) AppendFromTracker(ps []tracker.Peer) Peers {
for _, p := range ps {
_p := Peer{
IP: p.IP,
Port: p.Port,
Source: peerSourceTracker,
}
copy(_p.Id[:], p.ID)
ret = append(ret, _p)
}
return ret
}

vendor/github.com/anacrolix/torrent/README.md generated vendored Normal file

@@ -0,0 +1,85 @@
# torrent
[![Join the chat at https://gitter.im/anacrolix/torrent](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/anacrolix/torrent?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![GoDoc](https://godoc.org/github.com/anacrolix/torrent?status.svg)](https://godoc.org/github.com/anacrolix/torrent)
[![CircleCI](https://circleci.com/gh/anacrolix/torrent.svg?style=shield)](https://circleci.com/gh/anacrolix/torrent)
This repository implements BitTorrent-related packages and command-line utilities in Go. The emphasis is on use as a library from other projects. It's been used 24/7 in production by downstream services since late 2014. The implementation was specifically created to explore Go's concurrency capabilities, and to include the ability to stream data directly from the BitTorrent network. To this end it [supports seeking, readaheads and other features](https://godoc.org/github.com/anacrolix/torrent#Reader) exposing torrents and their files with the various Go idiomatic `io` package interfaces. This is also demonstrated through [torrentfs](#torrentfs).
There is [support for protocol encryption, DHT, PEX, uTP, and various extensions](https://godoc.org/github.com/anacrolix/torrent). There are [several data storage backends provided](https://godoc.org/github.com/anacrolix/torrent/storage): blob, file, bolt, and mmap, to name a few. You can [write your own](https://godoc.org/github.com/anacrolix/torrent/storage#ClientImpl) to store data for example on S3, or in a database.
Some noteworthy package dependencies that can be used for other purposes include:
* [go-libutp](https://github.com/anacrolix/go-libutp)
* [dht](https://github.com/anacrolix/dht)
* [bencode](https://godoc.org/github.com/anacrolix/torrent/bencode)
* [tracker](https://godoc.org/github.com/anacrolix/torrent/tracker)
## Installation
Install the library package with `go get github.com/anacrolix/torrent`, or the provided cmds with `go get github.com/anacrolix/torrent/cmd/...`.
## Library examples
There are some small [examples](https://godoc.org/github.com/anacrolix/torrent#pkg-examples) in the package documentation.
## Downstream projects
There are several web-frontends and Android clients among the known public projects:
* [Torrent.Express](https://torrent.express/)
* [Confluence](https://github.com/anacrolix/confluence)
* [Trickl](https://github.com/arranlomas/Trickl)
* [Elementum](http://elementum.surge.sh/)
* [goTorrent](https://github.com/deranjer/goTorrent)
* [Go Peerflix](https://github.com/Sioro-Neoku/go-peerflix)
* [Cloud Torrent](https://github.com/jpillora/cloud-torrent)
* [Android Torrent Client](https://gitlab.com/axet/android-torrent-client)
* [libtorrent](https://gitlab.com/axet/libtorrent)
* [Remote-Torrent](https://github.com/BruceWangNo1/remote-torrent)
## Help
Communication about the project is primarily through [Gitter](https://gitter.im/anacrolix/torrent) and the [issue tracker](https://github.com/anacrolix/torrent/issues).
## Command packages
Here I'll describe what some of the packages in `./cmd` do.
Note that the [`godo`](https://github.com/anacrolix/godo) command which is invoked in the following examples builds and executes a Go import path, like `go run`. It's easier to use this convention than to spell out the install/invoke cycle for every single example.
### torrent
Downloads torrents from the command-line. This first example does not use `godo`.
```sh
$ go get github.com/anacrolix/torrent/cmd/torrent
# Now 'torrent' should be in $GOPATH/bin, which should be in $PATH.
$ torrent 'magnet:?xt=urn:btih:KRWPCX3SJUM4IMM4YF5RPHL6ANPYTQPU'
ubuntu-14.04.2-desktop-amd64.iso [===================================================================>] 99% downloading (1.0 GB/1.0 GB)
2015/04/01 02:08:20 main.go:137: downloaded ALL the torrents
$ md5sum ubuntu-14.04.2-desktop-amd64.iso
1b305d585b1918f297164add46784116 ubuntu-14.04.2-desktop-amd64.iso
$ echo such amaze
wow
```
### torrentfs
torrentfs mounts a FUSE filesystem at `-mountDir`. The contents are the torrents described by the torrent files and magnet links at `-metainfoDir`. Data for read requests is fetched only as required from the torrent network, and stored at `-downloadDir`.
```sh
$ mkdir mnt torrents
$ godo github.com/anacrolix/torrent/cmd/torrentfs -mountDir=mnt -metainfoDir=torrents &
$ cd torrents
$ wget http://releases.ubuntu.com/14.04.2/ubuntu-14.04.2-desktop-amd64.iso.torrent
$ cd ..
$ ls mnt
ubuntu-14.04.2-desktop-amd64.iso
$ pv mnt/ubuntu-14.04.2-desktop-amd64.iso | md5sum
996MB 0:04:40 [3.55MB/s] [========================================>] 100%
1b305d585b1918f297164add46784116 -
```
### torrent-magnet
Creates a magnet link from a torrent file. Note the extracted trackers, display name, and info hash.
```sh
$ godo github.com/anacrolix/torrent/cmd/torrent-magnet < ubuntu-14.04.2-desktop-amd64.iso.torrent
magnet:?xt=urn:btih:546cf15f724d19c4319cc17b179d7e035f89c1f4&dn=ubuntu-14.04.2-desktop-amd64.iso&tr=http%3A%2F%2Ftorrent.ubuntu.com%3A6969%2Fannounce&tr=http%3A%2F%2Fipv6.torrent.ubuntu.com%3A6969%2Fannounce
```
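A magnet URI like the one printed above is an ordinary URL, so its fields can be pulled apart with nothing but the standard library. Below is a minimal sketch of that; the `parseMagnet` helper is hypothetical and not part of this repository's API:

```go
package main

import (
	"fmt"
	"net/url"
)

// parseMagnet extracts the common magnet URI fields:
// xt (exact topic, the urn:btih info hash), dn (display name),
// and tr (tracker announce URLs, which may repeat).
func parseMagnet(m string) (xt, dn string, trackers []string, err error) {
	u, err := url.Parse(m)
	if err != nil {
		return
	}
	q := u.Query() // percent-decodes the tracker URLs
	return q.Get("xt"), q.Get("dn"), q["tr"], nil
}

func main() {
	xt, dn, trackers, err := parseMagnet("magnet:?xt=urn:btih:546cf15f724d19c4319cc17b179d7e035f89c1f4" +
		"&dn=ubuntu-14.04.2-desktop-amd64.iso" +
		"&tr=http%3A%2F%2Ftorrent.ubuntu.com%3A6969%2Fannounce")
	if err != nil {
		panic(err)
	}
	fmt.Println(xt)
	fmt.Println(dn)
	fmt.Println(trackers)
}
```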

vendor/github.com/anacrolix/torrent/TODO generated vendored Normal file

@@ -0,0 +1,5 @@
* Make use of sparse file regions in download data for faster hashing. This is available as whence 3 and 4 on some OSs?
* When we're choked and interested, are we not interested if there's no longer anything that we want?
* dht: Randomize triedAddrs bloom filter to allow different Addr sets on each Announce.
* data/blob: Deleting incomplete data triggers io.ErrUnexpectedEOF that isn't recovered from.
* Handle wanted pieces more efficiently; it's slow in fillRequests since the prioritization system was changed.

vendor/github.com/anacrolix/torrent/bad_storage.go generated vendored Normal file

@@ -0,0 +1,57 @@
package torrent
import (
"errors"
"math/rand"
"strings"
"github.com/anacrolix/torrent/metainfo"
"github.com/anacrolix/torrent/storage"
)
type badStorage struct{}
var _ storage.ClientImpl = badStorage{}
func (bs badStorage) OpenTorrent(*metainfo.Info, metainfo.Hash) (storage.TorrentImpl, error) {
return bs, nil
}
func (bs badStorage) Close() error {
return nil
}
func (bs badStorage) Piece(p metainfo.Piece) storage.PieceImpl {
return badStoragePiece{p}
}
type badStoragePiece struct {
p metainfo.Piece
}
var _ storage.PieceImpl = badStoragePiece{}
func (p badStoragePiece) WriteAt(b []byte, off int64) (int, error) {
return 0, nil
}
func (p badStoragePiece) Completion() storage.Completion {
return storage.Completion{Complete: true, Ok: true}
}
func (p badStoragePiece) MarkComplete() error {
return errors.New("psyyyyyyyche")
}
func (p badStoragePiece) MarkNotComplete() error {
return errors.New("psyyyyyyyche")
}
func (p badStoragePiece) randomlyTruncatedDataString() string {
return "hello, world\n"[:rand.Intn(14)]
}
func (p badStoragePiece) ReadAt(b []byte, off int64) (n int, err error) {
r := strings.NewReader(p.randomlyTruncatedDataString())
return r.ReadAt(b, off+p.p.Offset())
}

vendor/github.com/anacrolix/torrent/bencode/README.md generated vendored Normal file

@@ -0,0 +1,38 @@
Bencode encoding/decoding sub-package. Its API design is similar to Go's standard `encoding/json` package.
## Install
```sh
go get github.com/anacrolix/torrent
```
## Usage
```go
package main

import (
	"fmt"
	"log"

	"github.com/anacrolix/torrent/bencode"
)

type Message struct {
	Query string `json:"q,omitempty" bencode:"q,omitempty"`
}

func main() {
	v := Message{Query: "ping"}

	// Encode v to bencode.
	data, err := bencode.Marshal(v)
	if err != nil {
		log.Fatal(err)
	}

	// Decode it back into a fresh value.
	var decoded Message
	if err := bencode.Unmarshal(data, &decoded); err != nil {
		log.Fatal(err)
	}
	fmt.Println(decoded)
}
```
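For readers new to the wire format itself, the following stdlib-only toy encoder (an illustration, not this package's implementation) shows the bencode rules: integers as `i<n>e`, strings as `<len>:<bytes>`, lists as `l...e`, and dicts as `d...e` with lexicographically sorted keys:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// encode bencodes a handful of Go types. Real implementations, like this
// package, use reflection and struct tags; this sketch only covers enough
// cases to show the format.
func encode(v interface{}) string {
	switch x := v.(type) {
	case int:
		// Integers: i<decimal>e
		return fmt.Sprintf("i%de", x)
	case string:
		// Strings: <length>:<bytes>
		return fmt.Sprintf("%d:%s", len(x), x)
	case []interface{}:
		// Lists: l<elements>e
		var b strings.Builder
		b.WriteByte('l')
		for _, e := range x {
			b.WriteString(encode(e))
		}
		b.WriteByte('e')
		return b.String()
	case map[string]interface{}:
		// Dicts: d<key><value>...e, keys sorted lexicographically
		keys := make([]string, 0, len(x))
		for k := range x {
			keys = append(keys, k)
		}
		sort.Strings(keys)
		var b strings.Builder
		b.WriteByte('d')
		for _, k := range keys {
			b.WriteString(encode(k))
			b.WriteString(encode(x[k]))
		}
		b.WriteByte('e')
		return b.String()
	}
	panic("unsupported type")
}

func main() {
	fmt.Println(encode(map[string]interface{}{"q": "ping", "t": 1}))
}
```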

vendor/github.com/anacrolix/torrent/bencode/api.go generated vendored Normal file

@@ -0,0 +1,157 @@
package bencode
import (
"bytes"
"fmt"
"io"
"reflect"
"github.com/anacrolix/missinggo/expect"
)
//----------------------------------------------------------------------------
// Errors
//----------------------------------------------------------------------------
// MarshalTypeError is returned when the marshaler cannot encode a type. A
// typical example is float32/float64, which have no bencode representation.
type MarshalTypeError struct {
Type reflect.Type
}
func (e *MarshalTypeError) Error() string {
return "bencode: unsupported type: " + e.Type.String()
}
// Unmarshal argument must be a non-nil value of some pointer type.
type UnmarshalInvalidArgError struct {
Type reflect.Type
}
func (e *UnmarshalInvalidArgError) Error() string {
if e.Type == nil {
return "bencode: Unmarshal(nil)"
}
if e.Type.Kind() != reflect.Ptr {
return "bencode: Unmarshal(non-pointer " + e.Type.String() + ")"
}
return "bencode: Unmarshal(nil " + e.Type.String() + ")"
}
// Unmarshaler spotted a value that was not appropriate for a given Go value.
type UnmarshalTypeError struct {
Value string
Type reflect.Type
}
func (e *UnmarshalTypeError) Error() string {
return "bencode: value (" + e.Value + ") is not appropriate for type: " +
e.Type.String()
}
// Unmarshaler tried to write to an unexported (therefore unwritable) field.
type UnmarshalFieldError struct {
Key string
Type reflect.Type
Field reflect.StructField
}
func (e *UnmarshalFieldError) Error() string {
return "bencode: key \"" + e.Key + "\" led to an unexported field \"" +
e.Field.Name + "\" in type: " + e.Type.String()
}
// The unmarshaler failed to parse malformed bencode input.
type SyntaxError struct {
Offset int64 // location of the error
What error // error description
}
func (e *SyntaxError) Error() string {
return fmt.Sprintf("bencode: syntax error (offset: %d): %s", e.Offset, e.What)
}
// A non-nil error was returned after calling MarshalBencode on a type which
// implements the Marshaler interface.
type MarshalerError struct {
Type reflect.Type
Err error
}
func (e *MarshalerError) Error() string {
return "bencode: error calling MarshalBencode for type " + e.Type.String() + ": " + e.Err.Error()
}
// A non-nil error was returned after calling UnmarshalBencode on a type which
// implements the Unmarshaler interface.
type UnmarshalerError struct {
Type reflect.Type
Err error
}
func (e *UnmarshalerError) Error() string {
return "bencode: error calling UnmarshalBencode for type " + e.Type.String() + ": " + e.Err.Error()
}
//----------------------------------------------------------------------------
// Interfaces
//----------------------------------------------------------------------------
// Any type that implements this interface will be marshaled using the
// specified method.
type Marshaler interface {
MarshalBencode() ([]byte, error)
}
// Any type that implements this interface will be unmarshaled using the
// specified method.
type Unmarshaler interface {
UnmarshalBencode([]byte) error
}
// Marshal the value 'v' to the bencode form, return the result as []byte and
// an error if any.
func Marshal(v interface{}) ([]byte, error) {
var buf bytes.Buffer
e := Encoder{w: &buf}
err := e.Encode(v)
if err != nil {
return nil, err
}
return buf.Bytes(), nil
}
func MustMarshal(v interface{}) []byte {
b, err := Marshal(v)
expect.Nil(err)
return b
}
// Unmarshal the bencode value in the 'data' to a value pointed by the 'v'
// pointer, return a non-nil error if any.
func Unmarshal(data []byte, v interface{}) (err error) {
buf := bytes.NewBuffer(data)
e := Decoder{r: buf}
err = e.Decode(v)
if err == nil && buf.Len() != 0 {
err = ErrUnusedTrailingBytes{buf.Len()}
}
return
}
type ErrUnusedTrailingBytes struct {
NumUnusedBytes int
}
func (me ErrUnusedTrailingBytes) Error() string {
return fmt.Sprintf("%d unused trailing bytes", me.NumUnusedBytes)
}
func NewDecoder(r io.Reader) *Decoder {
return &Decoder{r: &scanner{r: r}}
}
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{w: w}
}

vendor/github.com/anacrolix/torrent/bencode/bytes.go generated vendored Normal file

@@ -0,0 +1,18 @@
package bencode
type Bytes []byte
var (
_ Unmarshaler = &Bytes{}
_ Marshaler = &Bytes{}
_ Marshaler = Bytes{}
)
func (me *Bytes) UnmarshalBencode(b []byte) error {
*me = append([]byte(nil), b...)
return nil
}
func (me Bytes) MarshalBencode() ([]byte, error) {
return me, nil
}

vendor/github.com/anacrolix/torrent/bencode/decode.go generated vendored Normal file

@@ -0,0 +1,650 @@
package bencode
import (
"bytes"
"errors"
"fmt"
"io"
"math/big"
"reflect"
"runtime"
"strconv"
"sync"
)
type Decoder struct {
r interface {
io.ByteScanner
io.Reader
}
// Sum of bytes used to Decode values.
Offset int64
buf bytes.Buffer
}
func (d *Decoder) Decode(v interface{}) (err error) {
defer func() {
if err != nil {
return
}
r := recover()
_, ok := r.(runtime.Error)
if ok {
panic(r)
}
err, ok = r.(error)
if !ok && r != nil {
panic(r)
}
}()
pv := reflect.ValueOf(v)
if pv.Kind() != reflect.Ptr || pv.IsNil() {
return &UnmarshalInvalidArgError{reflect.TypeOf(v)}
}
ok, err := d.parseValue(pv.Elem())
if err != nil {
return
}
if !ok {
d.throwSyntaxError(d.Offset-1, errors.New("unexpected 'e'"))
}
return
}
func checkForUnexpectedEOF(err error, offset int64) {
if err == io.EOF {
panic(&SyntaxError{
Offset: offset,
What: io.ErrUnexpectedEOF,
})
}
}
func (d *Decoder) readByte() byte {
b, err := d.r.ReadByte()
if err != nil {
checkForUnexpectedEOF(err, d.Offset)
panic(err)
}
d.Offset++
return b
}
// Reads data into 'd.buf' until the 'sep' byte is encountered. The 'sep'
// byte is consumed but not included in 'd.buf'.
func (d *Decoder) readUntil(sep byte) {
for {
b := d.readByte()
if b == sep {
return
}
d.buf.WriteByte(b)
}
}
func checkForIntParseError(err error, offset int64) {
if err != nil {
panic(&SyntaxError{
Offset: offset,
What: err,
})
}
}
func (d *Decoder) throwSyntaxError(offset int64, err error) {
panic(&SyntaxError{
Offset: offset,
What: err,
})
}
// called when 'i' was consumed
func (d *Decoder) parseInt(v reflect.Value) {
start := d.Offset - 1
d.readUntil('e')
if d.buf.Len() == 0 {
panic(&SyntaxError{
Offset: start,
What: errors.New("empty integer value"),
})
}
s := bytesAsString(d.buf.Bytes())
switch v.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
n, err := strconv.ParseInt(s, 10, 64)
checkForIntParseError(err, start)
if v.OverflowInt(n) {
panic(&UnmarshalTypeError{
Value: "integer " + s,
Type: v.Type(),
})
}
v.SetInt(n)
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
n, err := strconv.ParseUint(s, 10, 64)
checkForIntParseError(err, start)
if v.OverflowUint(n) {
panic(&UnmarshalTypeError{
Value: "integer " + s,
Type: v.Type(),
})
}
v.SetUint(n)
case reflect.Bool:
v.SetBool(s != "0")
default:
panic(&UnmarshalTypeError{
Value: "integer " + s,
Type: v.Type(),
})
}
d.buf.Reset()
}
func (d *Decoder) parseString(v reflect.Value) error {
start := d.Offset - 1
// read the string length first
d.readUntil(':')
length, err := strconv.ParseInt(bytesAsString(d.buf.Bytes()), 10, 0)
checkForIntParseError(err, start)
defer d.buf.Reset()
read := func(b []byte) {
n, err := io.ReadFull(d.r, b)
d.Offset += int64(n)
if err != nil {
checkForUnexpectedEOF(err, d.Offset)
panic(&SyntaxError{
Offset: d.Offset,
What: errors.New("unexpected I/O error: " + err.Error()),
})
}
}
switch v.Kind() {
case reflect.String:
b := make([]byte, length)
read(b)
v.SetString(bytesAsString(b))
return nil
case reflect.Slice:
if v.Type().Elem().Kind() != reflect.Uint8 {
break
}
b := make([]byte, length)
read(b)
v.SetBytes(b)
return nil
case reflect.Array:
if v.Type().Elem().Kind() != reflect.Uint8 {
break
}
d.buf.Grow(int(length))
b := d.buf.Bytes()[:length]
read(b)
reflect.Copy(v, reflect.ValueOf(b))
return nil
}
d.buf.Grow(int(length))
read(d.buf.Bytes()[:length])
// I believe we return here to support "ignore_unmarshal_type_error".
return &UnmarshalTypeError{
Value: "string",
Type: v.Type(),
}
}
// Info for parsing a dict value.
type dictField struct {
Value reflect.Value // Storage for the parsed value.
// True if field value should be parsed into Value. If false, the value
// should be parsed and discarded.
Ok bool
Set func() // Call this after parsing into Value.
IgnoreUnmarshalTypeError bool
}
// Returns specifics for parsing a dict field value.
func getDictField(dict reflect.Value, key string) dictField {
// Get the value as a map value or as a struct field.
switch dict.Kind() {
case reflect.Map:
value := reflect.New(dict.Type().Elem()).Elem()
return dictField{
Value: value,
Ok: true,
Set: func() {
if dict.IsNil() {
dict.Set(reflect.MakeMap(dict.Type()))
}
// Assigns the value into the map.
dict.SetMapIndex(reflect.ValueOf(key).Convert(dict.Type().Key()), value)
},
}
case reflect.Struct:
sf, ok := getStructFieldForKey(dict.Type(), key)
if !ok {
return dictField{}
}
if sf.r.PkgPath != "" {
panic(&UnmarshalFieldError{
Key: key,
Type: dict.Type(),
Field: sf.r,
})
}
return dictField{
Value: dict.FieldByIndex(sf.r.Index),
Ok: true,
Set: func() {},
IgnoreUnmarshalTypeError: sf.tag.IgnoreUnmarshalTypeError(),
}
default:
return dictField{}
}
}
type structField struct {
r reflect.StructField
tag tag
}
var (
structFieldsMu sync.Mutex
structFields = map[reflect.Type]map[string]structField{}
)
func parseStructFields(struct_ reflect.Type, each func(string, structField)) {
for i, n := 0, struct_.NumField(); i < n; i++ {
f := struct_.Field(i)
if f.Anonymous {
continue
}
tagStr := f.Tag.Get("bencode")
if tagStr == "-" {
continue
}
tag := parseTag(tagStr)
key := tag.Key()
if key == "" {
key = f.Name
}
each(key, structField{f, tag})
}
}
func saveStructFields(struct_ reflect.Type) {
m := make(map[string]structField)
parseStructFields(struct_, func(key string, sf structField) {
m[key] = sf
})
structFields[struct_] = m
}
func getStructFieldForKey(struct_ reflect.Type, key string) (f structField, ok bool) {
structFieldsMu.Lock()
if _, ok := structFields[struct_]; !ok {
saveStructFields(struct_)
}
f, ok = structFields[struct_][key]
structFieldsMu.Unlock()
return
}
func (d *Decoder) parseDict(v reflect.Value) error {
// At this point the 'd' byte has been consumed; read key/value pairs one
// by one.
for {
var keyStr string
keyValue := reflect.ValueOf(&keyStr).Elem()
ok, err := d.parseValue(keyValue)
if err != nil {
return fmt.Errorf("error parsing dict key: %s", err)
}
if !ok {
return nil
}
df := getDictField(v, keyStr)
// now we need to actually parse it
if df.Ok {
// log.Printf("parsing ok struct field for key %q", keyStr)
ok, err = d.parseValue(df.Value)
} else {
// Discard the value, there's nowhere to put it.
var if_ interface{}
if_, ok = d.parseValueInterface()
if if_ == nil {
err = fmt.Errorf("error parsing value for key %q", keyStr)
}
}
if err != nil {
if _, ok := err.(*UnmarshalTypeError); !ok || !df.IgnoreUnmarshalTypeError {
return fmt.Errorf("parsing value for key %q: %s", keyStr, err)
}
}
if !ok {
return fmt.Errorf("missing value for key %q", keyStr)
}
if df.Ok {
df.Set()
}
}
}
func (d *Decoder) parseList(v reflect.Value) error {
switch v.Kind() {
case reflect.Array, reflect.Slice:
default:
panic(&UnmarshalTypeError{
Value: "array",
Type: v.Type(),
})
}
i := 0
for ; ; i++ {
if v.Kind() == reflect.Slice && i >= v.Len() {
v.Set(reflect.Append(v, reflect.Zero(v.Type().Elem())))
}
if i < v.Len() {
ok, err := d.parseValue(v.Index(i))
if err != nil {
return err
}
if !ok {
break
}
} else {
_, ok := d.parseValueInterface()
if !ok {
break
}
}
}
if i < v.Len() {
if v.Kind() == reflect.Array {
z := reflect.Zero(v.Type().Elem())
for n := v.Len(); i < n; i++ {
v.Index(i).Set(z)
}
} else {
v.SetLen(i)
}
}
if i == 0 && v.Kind() == reflect.Slice {
v.Set(reflect.MakeSlice(v.Type(), 0, 0))
}
return nil
}
func (d *Decoder) readOneValue() bool {
b, err := d.r.ReadByte()
if err != nil {
panic(err)
}
if b == 'e' {
d.r.UnreadByte()
return false
} else {
d.Offset++
d.buf.WriteByte(b)
}
switch b {
case 'd', 'l':
// read until there is nothing to read
for d.readOneValue() {
}
// consume 'e' as well
b = d.readByte()
d.buf.WriteByte(b)
case 'i':
d.readUntil('e')
d.buf.WriteString("e")
default:
if b >= '0' && b <= '9' {
start := d.buf.Len() - 1
d.readUntil(':')
length, err := strconv.ParseInt(bytesAsString(d.buf.Bytes()[start:]), 10, 64)
checkForIntParseError(err, d.Offset-1)
d.buf.WriteString(":")
n, err := io.CopyN(&d.buf, d.r, length)
d.Offset += n
if err != nil {
checkForUnexpectedEOF(err, d.Offset)
panic(&SyntaxError{
Offset: d.Offset,
What: errors.New("unexpected I/O error: " + err.Error()),
})
}
break
}
d.raiseUnknownValueType(b, d.Offset-1)
}
return true
}
func (d *Decoder) parseUnmarshaler(v reflect.Value) bool {
if !v.Type().Implements(unmarshalerType) {
if v.Addr().Type().Implements(unmarshalerType) {
v = v.Addr()
} else {
return false
}
}
d.buf.Reset()
if !d.readOneValue() {
return false
}
m := v.Interface().(Unmarshaler)
err := m.UnmarshalBencode(d.buf.Bytes())
if err != nil {
panic(&UnmarshalerError{v.Type(), err})
}
return true
}
// Returns true if there was a value and it's now stored in 'v', otherwise
// there was an end symbol ("e") and no value was stored.
func (d *Decoder) parseValue(v reflect.Value) (bool, error) {
// we support one level of indirection at the moment
if v.Kind() == reflect.Ptr {
// if the pointer is nil, allocate a new element of the type it
// points to
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
v = v.Elem()
}
if d.parseUnmarshaler(v) {
return true, nil
}
// common case: interface{}
if v.Kind() == reflect.Interface && v.NumMethod() == 0 {
iface, _ := d.parseValueInterface()
v.Set(reflect.ValueOf(iface))
return true, nil
}
b, err := d.r.ReadByte()
if err != nil {
panic(err)
}
d.Offset++
switch b {
case 'e':
return false, nil
case 'd':
return true, d.parseDict(v)
case 'l':
return true, d.parseList(v)
case 'i':
d.parseInt(v)
return true, nil
default:
if b >= '0' && b <= '9' {
// It's a string.
d.buf.Reset()
// Write the first digit of the length to the buffer.
d.buf.WriteByte(b)
return true, d.parseString(v)
}
d.raiseUnknownValueType(b, d.Offset-1)
}
panic("unreachable")
}
// An unknown bencode type character was encountered.
func (d *Decoder) raiseUnknownValueType(b byte, offset int64) {
panic(&SyntaxError{
Offset: offset,
What: fmt.Errorf("unknown value type %+q", b),
})
}
func (d *Decoder) parseValueInterface() (interface{}, bool) {
b, err := d.r.ReadByte()
if err != nil {
panic(err)
}
d.Offset++
switch b {
case 'e':
return nil, false
case 'd':
return d.parseDictInterface(), true
case 'l':
return d.parseListInterface(), true
case 'i':
return d.parseIntInterface(), true
default:
if b >= '0' && b <= '9' {
// string
// append first digit of the length to the buffer
d.buf.WriteByte(b)
return d.parseStringInterface(), true
}
d.raiseUnknownValueType(b, d.Offset-1)
panic("unreachable")
}
}
func (d *Decoder) parseIntInterface() (ret interface{}) {
start := d.Offset - 1
d.readUntil('e')
if d.buf.Len() == 0 {
panic(&SyntaxError{
Offset: start,
What: errors.New("empty integer value"),
})
}
n, err := strconv.ParseInt(d.buf.String(), 10, 64)
if ne, ok := err.(*strconv.NumError); ok && ne.Err == strconv.ErrRange {
i := new(big.Int)
_, ok := i.SetString(d.buf.String(), 10)
if !ok {
panic(&SyntaxError{
Offset: start,
What: errors.New("failed to parse integer"),
})
}
ret = i
} else {
checkForIntParseError(err, start)
ret = n
}
d.buf.Reset()
return
}
func (d *Decoder) parseStringInterface() interface{} {
start := d.Offset - 1
// read the string length first
d.readUntil(':')
length, err := strconv.ParseInt(d.buf.String(), 10, 64)
checkForIntParseError(err, start)
d.buf.Reset()
n, err := io.CopyN(&d.buf, d.r, length)
d.Offset += n
if err != nil {
checkForUnexpectedEOF(err, d.Offset)
panic(&SyntaxError{
Offset: d.Offset,
What: errors.New("unexpected I/O error: " + err.Error()),
})
}
s := d.buf.String()
d.buf.Reset()
return s
}
func (d *Decoder) parseDictInterface() interface{} {
dict := make(map[string]interface{})
for {
keyi, ok := d.parseValueInterface()
if !ok {
break
}
key, ok := keyi.(string)
if !ok {
panic(&SyntaxError{
Offset: d.Offset,
What: errors.New("non-string key in a dict"),
})
}
valuei, ok := d.parseValueInterface()
if !ok {
break
}
dict[key] = valuei
}
return dict
}
func (d *Decoder) parseListInterface() interface{} {
var list []interface{}
for {
valuei, ok := d.parseValueInterface()
if !ok {
break
}
list = append(list, valuei)
}
if list == nil {
list = make([]interface{}, 0, 0)
}
return list
}

vendor/github.com/anacrolix/torrent/bencode/encode.go
package bencode
import (
"io"
"math/big"
"reflect"
"runtime"
"sort"
"strconv"
"sync"
"github.com/anacrolix/missinggo"
)
func isEmptyValue(v reflect.Value) bool {
return missinggo.IsEmptyValue(v)
}
type Encoder struct {
w io.Writer
scratch [64]byte
}
func (e *Encoder) Encode(v interface{}) (err error) {
if v == nil {
return
}
defer func() {
if e := recover(); e != nil {
if _, ok := e.(runtime.Error); ok {
panic(e)
}
var ok bool
err, ok = e.(error)
if !ok {
panic(e)
}
}
}()
e.reflectValue(reflect.ValueOf(v))
return nil
}
type string_values []reflect.Value
func (sv string_values) Len() int { return len(sv) }
func (sv string_values) Swap(i, j int) { sv[i], sv[j] = sv[j], sv[i] }
func (sv string_values) Less(i, j int) bool { return sv.get(i) < sv.get(j) }
func (sv string_values) get(i int) string { return sv[i].String() }
func (e *Encoder) write(s []byte) {
_, err := e.w.Write(s)
if err != nil {
panic(err)
}
}
func (e *Encoder) writeString(s string) {
for s != "" {
n := copy(e.scratch[:], s)
s = s[n:]
e.write(e.scratch[:n])
}
}
func (e *Encoder) reflectString(s string) {
b := strconv.AppendInt(e.scratch[:0], int64(len(s)), 10)
e.write(b)
e.writeString(":")
e.writeString(s)
}
func (e *Encoder) reflectByteSlice(s []byte) {
b := strconv.AppendInt(e.scratch[:0], int64(len(s)), 10)
e.write(b)
e.writeString(":")
e.write(s)
}
// Returns true if the value implements Marshaler interface and marshaling was
// done successfully.
func (e *Encoder) reflectMarshaler(v reflect.Value) bool {
if !v.Type().Implements(marshalerType) {
if v.Kind() != reflect.Ptr && v.CanAddr() && v.Addr().Type().Implements(marshalerType) {
v = v.Addr()
} else {
return false
}
}
m := v.Interface().(Marshaler)
data, err := m.MarshalBencode()
if err != nil {
panic(&MarshalerError{v.Type(), err})
}
e.write(data)
return true
}
var bigIntType = reflect.TypeOf(big.Int{})
func (e *Encoder) reflectValue(v reflect.Value) {
if e.reflectMarshaler(v) {
return
}
if v.Type() == bigIntType {
e.writeString("i")
bi := v.Interface().(big.Int)
e.writeString(bi.String())
e.writeString("e")
return
}
switch v.Kind() {
case reflect.Bool:
if v.Bool() {
e.writeString("i1e")
} else {
e.writeString("i0e")
}
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
e.writeString("i")
b := strconv.AppendInt(e.scratch[:0], v.Int(), 10)
e.write(b)
e.writeString("e")
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
e.writeString("i")
b := strconv.AppendUint(e.scratch[:0], v.Uint(), 10)
e.write(b)
e.writeString("e")
case reflect.String:
e.reflectString(v.String())
case reflect.Struct:
e.writeString("d")
for _, ef := range encodeFields(v.Type()) {
field_value := v.Field(ef.i)
if ef.omit_empty && isEmptyValue(field_value) {
continue
}
e.reflectString(ef.tag)
e.reflectValue(field_value)
}
e.writeString("e")
case reflect.Map:
if v.Type().Key().Kind() != reflect.String {
panic(&MarshalTypeError{v.Type()})
}
if v.IsNil() {
e.writeString("de")
break
}
e.writeString("d")
sv := string_values(v.MapKeys())
sort.Sort(sv)
for _, key := range sv {
e.reflectString(key.String())
e.reflectValue(v.MapIndex(key))
}
e.writeString("e")
case reflect.Slice:
if v.IsNil() {
e.writeString("le")
break
}
if v.Type().Elem().Kind() == reflect.Uint8 {
s := v.Bytes()
e.reflectByteSlice(s)
break
}
fallthrough
case reflect.Array:
e.writeString("l")
for i, n := 0, v.Len(); i < n; i++ {
e.reflectValue(v.Index(i))
}
e.writeString("e")
case reflect.Interface:
e.reflectValue(v.Elem())
case reflect.Ptr:
if v.IsNil() {
v = reflect.Zero(v.Type().Elem())
} else {
v = v.Elem()
}
e.reflectValue(v)
default:
panic(&MarshalTypeError{v.Type()})
}
}
type encodeField struct {
i int
tag string
omit_empty bool
}
type encodeFieldsSortType []encodeField
func (ef encodeFieldsSortType) Len() int { return len(ef) }
func (ef encodeFieldsSortType) Swap(i, j int) { ef[i], ef[j] = ef[j], ef[i] }
func (ef encodeFieldsSortType) Less(i, j int) bool { return ef[i].tag < ef[j].tag }
var (
typeCacheLock sync.RWMutex
encodeFieldsCache = make(map[reflect.Type][]encodeField)
)
func encodeFields(t reflect.Type) []encodeField {
typeCacheLock.RLock()
fs, ok := encodeFieldsCache[t]
typeCacheLock.RUnlock()
if ok {
return fs
}
typeCacheLock.Lock()
defer typeCacheLock.Unlock()
fs, ok = encodeFieldsCache[t]
if ok {
return fs
}
for i, n := 0, t.NumField(); i < n; i++ {
f := t.Field(i)
if f.PkgPath != "" {
continue
}
if f.Anonymous {
continue
}
var ef encodeField
ef.i = i
ef.tag = f.Name
tv := getTag(f.Tag)
if tv.Ignore() {
continue
}
if tv.Key() != "" {
ef.tag = tv.Key()
}
ef.omit_empty = tv.OmitEmpty()
fs = append(fs, ef)
}
fss := encodeFieldsSortType(fs)
sort.Sort(fss)
encodeFieldsCache[t] = fs
return fs
}

vendor/github.com/anacrolix/torrent/bencode/fuzz.go
// +build gofuzz
package bencode
import (
"fmt"
"reflect"
)
func Fuzz(b []byte) int {
var d interface{}
err := Unmarshal(b, &d)
if err != nil {
return 0
}
b0, err := Marshal(d)
if err != nil {
panic(err)
}
var d0 interface{}
err = Unmarshal(b0, &d0)
if err != nil {
panic(err)
}
if !reflect.DeepEqual(d, d0) {
panic(fmt.Sprintf("%s != %s", d, d0))
}
return 1
}

vendor/github.com/anacrolix/torrent/bencode/misc.go
package bencode
import (
"reflect"
"unsafe"
)
// reflect.TypeOf on an interface value yields the dynamic type, so take the
// Elem of a pointer-to-interface to get the Marshaler interface type itself.
var marshalerType = reflect.TypeOf(func() *Marshaler {
var m Marshaler
return &m
}()).Elem()
// The same pointer-to-interface trick yields the Unmarshaler interface type.
var unmarshalerType = reflect.TypeOf(func() *Unmarshaler {
var i Unmarshaler
return &i
}()).Elem()
func bytesAsString(b []byte) string {
if len(b) == 0 {
return ""
}
return *(*string)(unsafe.Pointer(&reflect.StringHeader{
uintptr(unsafe.Pointer(&b[0])),
len(b),
}))
}

vendor/github.com/anacrolix/torrent/bencode/scanner.go
package bencode
import (
"errors"
"io"
)
// Implements io.ByteScanner over io.Reader, for use in Decoder, to ensure
// that as little of the undecoded input Reader as possible is consumed.
type scanner struct {
r io.Reader
b [1]byte // Buffer for ReadByte
unread bool // True if b has been unread, and so should be returned next
}
func (me *scanner) Read(b []byte) (int, error) {
return me.r.Read(b)
}
func (me *scanner) ReadByte() (byte, error) {
if me.unread {
me.unread = false
return me.b[0], nil
}
n, err := me.r.Read(me.b[:])
if n == 1 {
err = nil
}
return me.b[0], err
}
func (me *scanner) UnreadByte() error {
if me.unread {
return errors.New("byte already unread")
}
me.unread = true
return nil
}

vendor/github.com/anacrolix/torrent/bencode/tags.go
package bencode
import (
"reflect"
"strings"
)
func getTag(st reflect.StructTag) tag {
return parseTag(st.Get("bencode"))
}
type tag []string
func parseTag(tagStr string) tag {
return strings.Split(tagStr, ",")
}
func (me tag) Ignore() bool {
return me[0] == "-"
}
func (me tag) Key() string {
return me[0]
}
func (me tag) HasOpt(opt string) bool {
for _, s := range me[1:] {
if s == opt {
return true
}
}
return false
}
func (me tag) OmitEmpty() bool {
return me.HasOpt("omitempty")
}
func (me tag) IgnoreUnmarshalTypeError() bool {
return me.HasOpt("ignore_unmarshal_type_error")
}

vendor/github.com/anacrolix/torrent/bep40.go
package torrent
import (
"bytes"
"encoding/binary"
"errors"
"fmt"
"hash/crc32"
"net"
)
var table = crc32.MakeTable(crc32.Castagnoli)
type peerPriority = uint32
type ipPort struct {
IP net.IP
Port uint16
}
func sameSubnet(ones, bits int, a, b net.IP) bool {
mask := net.CIDRMask(ones, bits)
return a.Mask(mask).Equal(b.Mask(mask))
}
func ipv4Mask(a, b net.IP) net.IPMask {
if !sameSubnet(16, 32, a, b) {
return net.IPv4Mask(0xff, 0xff, 0x55, 0x55)
}
if !sameSubnet(24, 32, a, b) {
return net.IPv4Mask(0xff, 0xff, 0xff, 0x55)
}
return net.IPv4Mask(0xff, 0xff, 0xff, 0xff)
}
func mask(prefix, bytes int) net.IPMask {
ret := make(net.IPMask, bytes)
for i := range ret {
ret[i] = 0x55
}
for i := 0; i < prefix; i++ {
ret[i] = 0xff
}
return ret
}
func ipv6Mask(a, b net.IP) net.IPMask {
for i := 6; i <= 16; i++ {
if !sameSubnet(i*8, 128, a, b) {
return mask(i, 16)
}
}
panic(fmt.Sprintf("%s %s", a, b))
}
func bep40PriorityBytes(a, b ipPort) ([]byte, error) {
if a.IP.Equal(b.IP) {
var ret [4]byte
binary.BigEndian.PutUint16(ret[0:2], a.Port)
binary.BigEndian.PutUint16(ret[2:4], b.Port)
return ret[:], nil
}
if a4, b4 := a.IP.To4(), b.IP.To4(); a4 != nil && b4 != nil {
m := ipv4Mask(a.IP, b.IP)
return append(a4.Mask(m), b4.Mask(m)...), nil
}
if a6, b6 := a.IP.To16(), b.IP.To16(); a6 != nil && b6 != nil {
m := ipv6Mask(a.IP, b.IP)
return append(a6.Mask(m), b6.Mask(m)...), nil
}
return nil, errors.New("incomparable IPs")
}
func bep40Priority(a, b ipPort) (peerPriority, error) {
bs, err := bep40PriorityBytes(a, b)
if err != nil {
return 0, err
}
i := len(bs) / 2
_a, _b := bs[:i], bs[i:]
if bytes.Compare(_a, _b) > 0 {
bs = append(_b, _a...)
}
return crc32.Checksum(bs, table), nil
}
func bep40PriorityIgnoreError(a, b ipPort) peerPriority {
prio, _ := bep40Priority(a, b)
return prio
}

vendor/github.com/anacrolix/torrent/client.go (diff suppressed because it is too large)
vendor/github.com/anacrolix/torrent/config.go
package torrent
import (
"crypto/tls"
"net"
"net/http"
"time"
"github.com/anacrolix/dht"
"github.com/anacrolix/missinggo"
"github.com/anacrolix/missinggo/expect"
"golang.org/x/time/rate"
"github.com/anacrolix/torrent/iplist"
"github.com/anacrolix/torrent/storage"
)
var DefaultHTTPUserAgent = "Go-Torrent/1.0"
// Probably not safe to modify this after it's given to a Client.
type ClientConfig struct {
// Store torrent file data in this directory unless .DefaultStorage is
// specified.
DataDir string `long:"data-dir" description:"directory to store downloaded torrent data"`
// The address to listen for new uTP and TCP bittorrent protocol
// connections. DHT shares a UDP socket with uTP unless configured
// otherwise.
ListenHost func(network string) string
ListenPort int
NoDefaultPortForwarding bool
// Don't announce to trackers. This only leaves DHT to discover peers.
DisableTrackers bool `long:"disable-trackers"`
DisablePEX bool `long:"disable-pex"`
// Don't create a DHT.
NoDHT bool `long:"disable-dht"`
DhtStartingNodes dht.StartingNodesGetter
// Never send chunks to peers.
NoUpload bool `long:"no-upload"`
// Disable uploading even when it isn't fair.
DisableAggressiveUpload bool `long:"disable-aggressive-upload"`
// Upload even after there's nothing in it for us. By default uploading is
// not altruistic, we'll only upload to encourage the peer to reciprocate.
Seed bool `long:"seed"`
// Only applies to chunks uploaded to peers, to maintain responsiveness
// communicating local Client state to peers. Each limiter token
// represents one byte. The Limiter's burst must be large enough to fit a
// whole chunk, which is usually 16 KiB (see TorrentSpec.ChunkSize).
UploadRateLimiter *rate.Limiter
// Rate limits all reads from connections to peers. Each limiter token
// represents one byte. The Limiter's burst must be bigger than the
// largest Read performed on the underlying rate-limiting io.Reader
// minus one. This is likely to be the larger of the main read loop buffer
// (~4096), and the requested chunk size (~16KiB, see
// TorrentSpec.ChunkSize).
DownloadRateLimiter *rate.Limiter
// User-provided Client peer ID. If not present, one is generated automatically.
PeerID string
// For the bittorrent protocol.
DisableUTP bool
// For the bittorrent protocol.
DisableTCP bool `long:"disable-tcp"`
// Called to instantiate storage for each added torrent. Builtin backends
// are in the storage package. If not set, the "file" implementation is
// used.
DefaultStorage storage.ClientImpl
EncryptionPolicy
// Sets usage of a SOCKS5 proxy. Authentication should be included in the
// URL if needed.
ProxyURL string
IPBlocklist iplist.Ranger
DisableIPv6 bool `long:"disable-ipv6"`
DisableIPv4 bool
DisableIPv4Peers bool
// Perform logging and any other behaviour that will help debug.
Debug bool `help:"enable debugging"`
// For querying HTTP trackers.
TrackerHttpClient *http.Client
// HTTPUserAgent changes default UserAgent for HTTP requests
HTTPUserAgent string
// Updated occasionally when there have been changes to client behaviour in
// case other clients are assuming anything of us. See also `bep20`.
ExtendedHandshakeClientVersion string // default "go.torrent dev 20150624"
// Peer ID client identifier prefix. We'll update this occasionally to
// reflect changes to client behaviour that other clients may depend on.
// Also see `extendedHandshakeClientVersion`.
Bep20 string // default "-GT0001-"
// Peer dial timeout to use when there are limited peers.
NominalDialTimeout time.Duration
// Minimum peer dial timeout to use (even if we have lots of peers).
MinDialTimeout time.Duration
EstablishedConnsPerTorrent int
HalfOpenConnsPerTorrent int
// Maximum number of peer addresses in reserve.
TorrentPeersHighWater int
// Minimum number of peers before effort is made to obtain more peers.
TorrentPeersLowWater int
// Limit how long handshake can take. This is to reduce the lingering
// impact of a few bad apples. 4s loses 1% of successful handshakes that
// are obtained with 60s timeout, and 5% of unsuccessful handshakes.
HandshakesTimeout time.Duration
// The IP addresses as our peers should see them. May differ from the
// local interfaces due to NAT or other network configurations.
PublicIp4 net.IP
PublicIp6 net.IP
DisableAcceptRateLimiting bool
// Don't add connections that have the same peer ID as an existing
// connection for a given Torrent.
dropDuplicatePeerIds bool
}
func (cfg *ClientConfig) SetListenAddr(addr string) *ClientConfig {
host, port, err := missinggo.ParseHostPort(addr)
expect.Nil(err)
cfg.ListenHost = func(string) string { return host }
cfg.ListenPort = port
return cfg
}
func NewDefaultClientConfig() *ClientConfig {
return &ClientConfig{
TrackerHttpClient: &http.Client{
Timeout: time.Second * 15,
Transport: &http.Transport{
Dial: (&net.Dialer{
Timeout: 15 * time.Second,
}).Dial,
TLSHandshakeTimeout: 15 * time.Second,
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}},
HTTPUserAgent: DefaultHTTPUserAgent,
ExtendedHandshakeClientVersion: "go.torrent dev 20150624",
Bep20: "-GT0001-",
NominalDialTimeout: 20 * time.Second,
MinDialTimeout: 3 * time.Second,
EstablishedConnsPerTorrent: 50,
HalfOpenConnsPerTorrent: 25,
TorrentPeersHighWater: 500,
TorrentPeersLowWater: 50,
HandshakesTimeout: 4 * time.Second,
DhtStartingNodes: dht.GlobalBootstrapAddrs,
ListenHost: func(string) string { return "" },
UploadRateLimiter: unlimited,
DownloadRateLimiter: unlimited,
}
}
type EncryptionPolicy struct {
DisableEncryption bool
ForceEncryption bool // Don't allow unobfuscated connections.
PreferNoEncryption bool
}

vendor/github.com/anacrolix/torrent/conn_stats.go
package torrent
import (
"fmt"
"io"
"reflect"
"sync/atomic"
pp "github.com/anacrolix/torrent/peer_protocol"
)
// Various connection-level metrics. At the Torrent level these are
// aggregates. Chunks are messages with data payloads. Data is actual torrent
// content without any overhead. Useful is something we needed locally.
// Unwanted is something we didn't ask for (but may still be useful). Written
// is data sent to the peer, and Read is data received from them.
type ConnStats struct {
// Total bytes on the wire. Includes handshakes and encryption.
BytesWritten Count
BytesWrittenData Count
BytesRead Count
BytesReadData Count
BytesReadUsefulData Count
ChunksWritten Count
ChunksRead Count
ChunksReadUseful Count
ChunksReadWasted Count
MetadataChunksRead Count
// Number of pieces data was written to, that subsequently passed verification.
PiecesDirtiedGood Count
// Number of pieces data was written to, that subsequently failed
// verification. Note that a connection may not have been the sole dirtier
// of a piece.
PiecesDirtiedBad Count
}
func (me *ConnStats) Copy() (ret ConnStats) {
for i := 0; i < reflect.TypeOf(ConnStats{}).NumField(); i++ {
n := reflect.ValueOf(me).Elem().Field(i).Addr().Interface().(*Count).Int64()
reflect.ValueOf(&ret).Elem().Field(i).Addr().Interface().(*Count).Add(n)
}
return
}
type Count struct {
n int64
}
var _ fmt.Stringer = (*Count)(nil)
func (me *Count) Add(n int64) {
atomic.AddInt64(&me.n, n)
}
func (me *Count) Int64() int64 {
return atomic.LoadInt64(&me.n)
}
func (me *Count) String() string {
return fmt.Sprintf("%v", me.Int64())
}
func (cs *ConnStats) wroteMsg(msg *pp.Message) {
// TODO: Track messages and not just chunks.
switch msg.Type {
case pp.Piece:
cs.ChunksWritten.Add(1)
cs.BytesWrittenData.Add(int64(len(msg.Piece)))
}
}
func (cs *ConnStats) readMsg(msg *pp.Message) {
// We want to also handle extended metadata pieces here, but we wouldn't
// have decoded the extended payload yet.
switch msg.Type {
case pp.Piece:
cs.ChunksRead.Add(1)
cs.BytesReadData.Add(int64(len(msg.Piece)))
}
}
func (cs *ConnStats) incrementPiecesDirtiedGood() {
cs.PiecesDirtiedGood.Add(1)
}
func (cs *ConnStats) incrementPiecesDirtiedBad() {
cs.PiecesDirtiedBad.Add(1)
}
func add(n int64, f func(*ConnStats) *Count) func(*ConnStats) {
return func(cs *ConnStats) {
p := f(cs)
p.Add(n)
}
}
type connStatsReadWriter struct {
rw io.ReadWriter
c *connection
}
func (me connStatsReadWriter) Write(b []byte) (n int, err error) {
n, err = me.rw.Write(b)
me.c.wroteBytes(int64(n))
return
}
func (me connStatsReadWriter) Read(b []byte) (n int, err error) {
n, err = me.rw.Read(b)
me.c.readBytes(int64(n))
return
}

vendor/github.com/anacrolix/torrent/connection.go (diff suppressed because it is too large)
vendor/github.com/anacrolix/torrent/doc.go
/*
Package torrent implements a torrent client. Goals include:
* Configurable data storage, such as file, mmap, and piece-based.
* Downloading on demand: torrent.Reader will request only the data required to
satisfy Reads, which is ideal for streaming and torrentfs.
BitTorrent features implemented include:
* Protocol obfuscation
* DHT
* uTP
* PEX
* Magnet links
* IP Blocklists
* Some IPv6
* HTTP and UDP tracker clients
* BEPs:
- 3: Basic BitTorrent protocol
- 5: DHT
- 6: Fast Extension (have all/none only)
- 7: IPv6 Tracker Extension
- 9: ut_metadata
- 10: Extension protocol
- 11: PEX
- 12: Multitracker metadata extension
- 15: UDP Tracker Protocol
- 20: Peer ID convention ("-GTnnnn-")
- 23: Tracker Returns Compact Peer Lists
- 27: Private torrents
- 29: uTorrent transport protocol
- 41: UDP Tracker Protocol Extensions
- 42: DHT Security extension
- 43: Read-only DHT Nodes
*/
package torrent

vendor/github.com/anacrolix/torrent/file.go
package torrent
import (
"strings"
"github.com/anacrolix/torrent/metainfo"
)
// Provides access to regions of torrent data that correspond to its files.
type File struct {
t *Torrent
path string
offset int64
length int64
fi metainfo.FileInfo
prio piecePriority
}
func (f *File) Torrent() *Torrent {
return f.t
}
// Data for this file begins this many bytes into the Torrent.
func (f *File) Offset() int64 {
return f.offset
}
// The FileInfo from the metainfo.Info to which this file corresponds.
func (f File) FileInfo() metainfo.FileInfo {
return f.fi
}
// The file's path components joined by '/'.
func (f File) Path() string {
return f.path
}
// The file's length in bytes.
func (f *File) Length() int64 {
return f.length
}
// The relative file path for a multi-file torrent, and the torrent name for a
// single-file torrent.
func (f *File) DisplayPath() string {
fip := f.FileInfo().Path
if len(fip) == 0 {
return f.t.info.Name
}
return strings.Join(fip, "/")
}
// The download status of a piece that comprises part of a File.
type FilePieceState struct {
Bytes int64 // Bytes within the piece that are part of this File.
PieceState
}
// Returns the state of pieces in this file.
func (f *File) State() (ret []FilePieceState) {
f.t.cl.rLock()
defer f.t.cl.rUnlock()
pieceSize := int64(f.t.usualPieceSize())
off := f.offset % pieceSize
remaining := f.length
for i := pieceIndex(f.offset / pieceSize); ; i++ {
if remaining == 0 {
break
}
len1 := pieceSize - off
if len1 > remaining {
len1 = remaining
}
ps := f.t.pieceState(i)
ret = append(ret, FilePieceState{len1, ps})
off = 0
remaining -= len1
}
return
}
// Requests that all pieces containing data in the file be downloaded.
func (f *File) Download() {
f.SetPriority(PiecePriorityNormal)
}
func byteRegionExclusivePieces(off, size, pieceSize int64) (begin, end int) {
begin = int((off + pieceSize - 1) / pieceSize)
end = int((off + size) / pieceSize)
return
}
func (f *File) exclusivePieces() (begin, end int) {
return byteRegionExclusivePieces(f.offset, f.length, int64(f.t.usualPieceSize()))
}
// Deprecated: Use File.SetPriority.
func (f *File) Cancel() {
f.SetPriority(PiecePriorityNone)
}
func (f *File) NewReader() Reader {
tr := reader{
mu: f.t.cl.locker(),
t: f.t,
readahead: 5 * 1024 * 1024,
offset: f.Offset(),
length: f.Length(),
}
f.t.addReader(&tr)
return &tr
}
// Sets the minimum priority for pieces in the File.
func (f *File) SetPriority(prio piecePriority) {
f.t.cl.lock()
defer f.t.cl.unlock()
if prio == f.prio {
return
}
f.prio = prio
f.t.updatePiecePriorities(f.firstPieceIndex(), f.endPieceIndex())
}
// Returns the priority per File.SetPriority.
func (f *File) Priority() piecePriority {
f.t.cl.lock()
defer f.t.cl.unlock()
return f.prio
}
func (f *File) firstPieceIndex() pieceIndex {
if f.t.usualPieceSize() == 0 {
return 0
}
return pieceIndex(f.offset / int64(f.t.usualPieceSize()))
}
func (f *File) endPieceIndex() pieceIndex {
if f.t.usualPieceSize() == 0 {
return 0
}
return pieceIndex((f.offset+f.length-1)/int64(f.t.usualPieceSize())) + 1
}

vendor/github.com/anacrolix/torrent/global.go
package torrent
import (
"crypto"
"expvar"
pp "github.com/anacrolix/torrent/peer_protocol"
)
const (
pieceHash = crypto.SHA1
maxRequests = 250 // Maximum pending requests we allow peers to send us.
defaultChunkSize = 0x4000 // 16KiB
)
// These are our extended message IDs. Peers will use these values to
// select which extension a message is intended for.
const (
metadataExtendedId = iota + 1 // 0 is reserved for deleting keys
pexExtendedId
)
func defaultPeerExtensionBytes() PeerExtensionBits {
return pp.NewPeerExtensionBytes(pp.ExtensionBitDHT, pp.ExtensionBitExtended, pp.ExtensionBitFast)
}
// A lot of these counters could move to their own file, but they may be
// attached to a Client someday.
var (
torrent = expvar.NewMap("torrent")
peersAddedBySource = expvar.NewMap("peersAddedBySource")
pieceHashedCorrect = expvar.NewInt("pieceHashedCorrect")
pieceHashedNotCorrect = expvar.NewInt("pieceHashedNotCorrect")
peerExtensions = expvar.NewMap("peerExtensions")
completedHandshakeConnectionFlags = expvar.NewMap("completedHandshakeConnectionFlags")
// Count of connections to peers with the same client ID as ours.
connsToSelf = expvar.NewInt("connsToSelf")
receivedKeepalives = expvar.NewInt("receivedKeepalives")
postedKeepalives = expvar.NewInt("postedKeepalives")
// Requests received for pieces we don't have.
requestsReceivedForMissingPieces = expvar.NewInt("requestsReceivedForMissingPieces")
requestedChunkLengths = expvar.NewMap("requestedChunkLengths")
messageTypesReceived = expvar.NewMap("messageTypesReceived")
// Track the effectiveness of Torrent.connPieceInclinationPool.
pieceInclinationsReused = expvar.NewInt("pieceInclinationsReused")
pieceInclinationsNew = expvar.NewInt("pieceInclinationsNew")
pieceInclinationsPut = expvar.NewInt("pieceInclinationsPut")
)

vendor/github.com/anacrolix/torrent/go.mod generated vendored Normal file
@@ -0,0 +1,44 @@
module github.com/anacrolix/torrent
require (
bazil.org/fuse v0.0.0-20180421153158-65cc252bf669
github.com/anacrolix/dht v0.0.0-20180412060941-24cbf25b72a4
github.com/anacrolix/envpprof v0.0.0-20180404065416-323002cec2fa
github.com/anacrolix/go-libutp v0.0.0-20180725071407-34b43d880940
github.com/anacrolix/log v0.0.0-20180412014343-2323884b361d
github.com/anacrolix/missinggo v0.0.0-20180725070939-60ef2fbf63df
github.com/anacrolix/sync v0.0.0-20180725074606-fda11526ff08
github.com/anacrolix/tagflag v0.0.0-20180109131632-2146c8d41bf0
github.com/anacrolix/utp v0.0.0-20180219060659-9e0e1d1d0572
github.com/boltdb/bolt v1.3.1
github.com/bradfitz/iter v0.0.0-20140124041915-454541ec3da2
github.com/davecgh/go-spew v1.1.0
github.com/dustin/go-humanize v0.0.0-20180421182945-02af3965c54e
github.com/edsrzf/mmap-go v0.0.0-20170320065105-0bce6a688712
github.com/elgatito/upnp v0.0.0-20180711183757-2f244d205f9a
github.com/fsnotify/fsnotify v1.4.7
github.com/google/btree v0.0.0-20180124185431-e89373fe6b4a
github.com/gopherjs/gopherjs v0.0.0-20180628210949-0892b62f0d9f // indirect
github.com/gosuri/uilive v0.0.0-20170323041506-ac356e6e42cd // indirect
github.com/gosuri/uiprogress v0.0.0-20170224063937-d0567a9d84a1
github.com/jessevdk/go-flags v1.4.0
github.com/jtolds/gls v4.2.1+incompatible // indirect
github.com/mattn/go-isatty v0.0.3 // indirect
github.com/mattn/go-sqlite3 v1.7.0
github.com/mschoch/smat v0.0.0-20160514031455-90eadee771ae // indirect
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7 // indirect
github.com/pkg/errors v0.8.0
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/smartystreets/assertions v0.0.0-20180607162144-eb5b59917fa2 // indirect
github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a // indirect
github.com/smartystreets/gunit v0.0.0-20180314194857-6f0d6275bdcd // indirect
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72 // indirect
github.com/stretchr/testify v1.2.1
github.com/willf/bitset v1.1.3 // indirect
github.com/willf/bloom v0.0.0-20170505221640-54e3b963ee16 // indirect
golang.org/x/net v0.0.0-20180724234803-3673e40ba225
golang.org/x/sys v0.0.0-20180724212812-e072cadbbdc8 // indirect
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2
)
replace github.com/glycerine/goconvey => github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a

vendor/github.com/anacrolix/torrent/handshake.go generated vendored Normal file
@@ -0,0 +1,74 @@
package torrent
import (
"bytes"
"fmt"
"io"
"net"
"time"
"github.com/anacrolix/torrent/mse"
pp "github.com/anacrolix/torrent/peer_protocol"
)
// Wraps a raw connection and provides the interface we want for using the
// connection in the message loop.
type deadlineReader struct {
nc net.Conn
r io.Reader
}
func (r deadlineReader) Read(b []byte) (int, error) {
// Keep-alives should be received every 2 mins. Give a bit of grace time.
err := r.nc.SetReadDeadline(time.Now().Add(150 * time.Second))
if err != nil {
return 0, fmt.Errorf("error setting read deadline: %s", err)
}
return r.r.Read(b)
}
func handleEncryption(
rw io.ReadWriter,
skeys mse.SecretKeyIter,
policy EncryptionPolicy,
) (
ret io.ReadWriter,
headerEncrypted bool,
cryptoMethod mse.CryptoMethod,
err error,
) {
if !policy.ForceEncryption {
var protocol [len(pp.Protocol)]byte
_, err = io.ReadFull(rw, protocol[:])
if err != nil {
return
}
rw = struct {
io.Reader
io.Writer
}{
io.MultiReader(bytes.NewReader(protocol[:]), rw),
rw,
}
if string(protocol[:]) == pp.Protocol {
ret = rw
return
}
}
headerEncrypted = true
ret, cryptoMethod, err = mse.ReceiveHandshake(rw, skeys, func(provides mse.CryptoMethod) mse.CryptoMethod {
switch {
case policy.ForceEncryption:
return mse.CryptoMethodRC4
case policy.DisableEncryption:
return mse.CryptoMethodPlaintext
case policy.PreferNoEncryption && provides&mse.CryptoMethodPlaintext != 0:
return mse.CryptoMethodPlaintext
default:
return mse.DefaultCryptoSelector(provides)
}
})
return
}
type PeerExtensionBits = pp.PeerExtensionBits

vendor/github.com/anacrolix/torrent/iplist/cidr.go generated vendored Normal file
@@ -0,0 +1,41 @@
package iplist
import (
"bufio"
"io"
"net"
)
func ParseCIDRListReader(r io.Reader) (ret []Range, err error) {
s := bufio.NewScanner(r)
for s.Scan() {
err = func() (err error) {
_, in, err := net.ParseCIDR(s.Text())
if err != nil {
return
}
ret = append(ret, Range{
First: in.IP,
Last: IPNetLast(in),
})
return
}()
if err != nil {
return
}
}
return
}
// Returns the last, inclusive IP in a net.IPNet.
func IPNetLast(in *net.IPNet) (last net.IP) {
n := len(in.IP)
if n != len(in.Mask) {
panic("wat")
}
last = make(net.IP, n)
for i := 0; i < n; i++ {
last[i] = in.IP[i] | ^in.Mask[i]
}
return
}

vendor/github.com/anacrolix/torrent/iplist/iplist.go generated vendored Normal file
@@ -0,0 +1,185 @@
// Package iplist handles the P2P Plaintext Format described by
// https://en.wikipedia.org/wiki/PeerGuardian#P2P_plaintext_format.
package iplist
import (
"bufio"
"bytes"
"errors"
"fmt"
"io"
"net"
"sort"
)
// An abstraction of IP list implementations.
type Ranger interface {
// Return a Range containing the IP.
Lookup(net.IP) (r Range, ok bool)
// Returns the number of ranges in the list.
NumRanges() int
}
type IPList struct {
ranges []Range
}
type Range struct {
First, Last net.IP
Description string
}
func (r Range) String() string {
return fmt.Sprintf("%s-%s: %s", r.First, r.Last, r.Description)
}
// Create a new IP list. The given ranges must already be sorted by the
// lower bound IP in each range. Behaviour is undefined for lists of
// overlapping ranges.
func New(initSorted []Range) *IPList {
return &IPList{
ranges: initSorted,
}
}
func (ipl *IPList) NumRanges() int {
if ipl == nil {
return 0
}
return len(ipl.ranges)
}
// Return the range the given IP is in. ok is false if no range is found.
func (ipl *IPList) Lookup(ip net.IP) (r Range, ok bool) {
if ipl == nil {
return
}
// TODO: Perhaps all addresses should be converted to IPv6, if the future
// of IP is to always be backwards compatible. But this will cost 4x the
// memory for IPv4 addresses?
v4 := ip.To4()
if v4 != nil {
r, ok = ipl.lookup(v4)
if ok {
return
}
}
v6 := ip.To16()
if v6 != nil {
return ipl.lookup(v6)
}
if v4 == nil && v6 == nil {
r = Range{
Description: "bad IP",
}
ok = true
}
return
}
// Return a range that contains ip, or nil.
func lookup(
first func(i int) net.IP,
full func(i int) Range,
n int,
ip net.IP,
) (
r Range, ok bool,
) {
// Find the first index whose successor's lower bound exceeds ip: the
// last candidate range that could contain ip.
i := sort.Search(n, func(i int) bool {
if i+1 >= n {
return true
}
return bytes.Compare(ip, first(i+1)) < 0
})
if i == n {
return
}
r = full(i)
ok = bytes.Compare(r.First, ip) <= 0 && bytes.Compare(ip, r.Last) <= 0
return
}
// Return the range the given IP is in. ok is false if no range is found.
func (ipl *IPList) lookup(ip net.IP) (Range, bool) {
return lookup(func(i int) net.IP {
return ipl.ranges[i].First
}, func(i int) Range {
return ipl.ranges[i]
}, len(ipl.ranges), ip)
}
func minifyIP(ip *net.IP) {
v4 := ip.To4()
if v4 != nil {
*ip = append(make([]byte, 0, 4), v4...)
}
}
// Parse a line of the PeerGuardian Text Lists (P2P) Format. Returns !ok but
// no error if a line doesn't contain a range but isn't erroneous, such as
// comment and blank lines.
func ParseBlocklistP2PLine(l []byte) (r Range, ok bool, err error) {
l = bytes.TrimSpace(l)
if len(l) == 0 || bytes.HasPrefix(l, []byte("#")) {
return
}
// TODO: Check this when IPv6 blocklists are available.
colon := bytes.LastIndexAny(l, ":")
if colon == -1 {
err = errors.New("missing colon")
return
}
hyphen := bytes.IndexByte(l[colon+1:], '-')
if hyphen == -1 {
err = errors.New("missing hyphen")
return
}
hyphen += colon + 1
r.Description = string(l[:colon])
r.First = net.ParseIP(string(l[colon+1 : hyphen]))
minifyIP(&r.First)
r.Last = net.ParseIP(string(l[hyphen+1:]))
minifyIP(&r.Last)
if r.First == nil || r.Last == nil || len(r.First) != len(r.Last) {
err = errors.New("bad IP range")
return
}
ok = true
return
}
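For illustration, a stripped-down version of the line parse above, run against a made-up blocklist entry; the real function also minifies the IPs and reports errors:

```go
package main

import (
	"bytes"
	"fmt"
	"net"
)

// parseLine is a minimal re-implementation of the P2P line format
// parsed above: "Description:firstIP-lastIP".
func parseLine(l []byte) (desc string, first, last net.IP, ok bool) {
	colon := bytes.LastIndexByte(l, ':')
	if colon == -1 {
		return
	}
	hyphen := bytes.IndexByte(l[colon+1:], '-')
	if hyphen == -1 {
		return
	}
	hyphen += colon + 1
	desc = string(l[:colon])
	first = net.ParseIP(string(l[colon+1 : hyphen]))
	last = net.ParseIP(string(l[hyphen+1:]))
	ok = first != nil && last != nil
	return
}

func main() {
	desc, first, last, ok := parseLine([]byte("Example org:1.2.3.0-1.2.3.255"))
	fmt.Println(desc, first, last, ok) // Example org 1.2.3.0 1.2.3.255 true
}
```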
// Creates an IPList from a line-delimited P2P Plaintext file.
func NewFromReader(f io.Reader) (ret *IPList, err error) {
var ranges []Range
// There are a lot of duplicate descriptions, so we maintain a pool and
// reuse them to reduce memory overhead.
uniqStrs := make(map[string]string)
scanner := bufio.NewScanner(f)
lineNum := 1
for scanner.Scan() {
r, ok, lineErr := ParseBlocklistP2PLine(scanner.Bytes())
if lineErr != nil {
err = fmt.Errorf("error parsing line %d: %s", lineNum, lineErr)
return
}
lineNum++
if !ok {
continue
}
if s, ok := uniqStrs[r.Description]; ok {
r.Description = s
} else {
uniqStrs[r.Description] = r.Description
}
ranges = append(ranges, r)
}
err = scanner.Err()
if err != nil {
return
}
ret = New(ranges)
return
}

vendor/github.com/anacrolix/torrent/iplist/packed.go generated vendored Normal file
@@ -0,0 +1,142 @@
package iplist
import (
"encoding/binary"
"fmt"
"io"
"net"
"os"
"github.com/edsrzf/mmap-go"
)
// The packed format is an 8 byte integer of the number of ranges, then 44
// bytes per range, consisting of the 16 byte lower bound IP of the range,
// 16 bytes of the upper, inclusive bound, 8 bytes for the offset of the
// description from the end of the packed ranges, and 4 bytes for the
// length of the description. After these packed ranges are the
// concatenated descriptions.
const (
packedRangesOffset = 8
packedRangeLen = 44
)
func (ipl *IPList) WritePacked(w io.Writer) (err error) {
descOffsets := make(map[string]int64, len(ipl.ranges))
descs := make([]string, 0, len(ipl.ranges))
var nextOffset int64
// This is a little monadic, no?
write := func(b []byte, expectedLen int) {
if err != nil {
return
}
var n int
n, err = w.Write(b)
if err != nil {
return
}
if n != expectedLen {
panic(n)
}
}
var b [8]byte
binary.LittleEndian.PutUint64(b[:], uint64(len(ipl.ranges)))
write(b[:], 8)
for _, r := range ipl.ranges {
write(r.First.To16(), 16)
write(r.Last.To16(), 16)
descOff, ok := descOffsets[r.Description]
if !ok {
descOff = nextOffset
descOffsets[r.Description] = descOff
descs = append(descs, r.Description)
nextOffset += int64(len(r.Description))
}
binary.LittleEndian.PutUint64(b[:], uint64(descOff))
write(b[:], 8)
binary.LittleEndian.PutUint32(b[:], uint32(len(r.Description)))
write(b[:4], 4)
}
for _, d := range descs {
write([]byte(d), len(d))
}
return
}
func NewFromPacked(b []byte) PackedIPList {
ret := PackedIPList(b)
minLen := packedRangesOffset + ret.len()*packedRangeLen
if len(b) < minLen {
panic(fmt.Sprintf("packed len %d < %d", len(b), minLen))
}
return ret
}
type PackedIPList []byte
var _ Ranger = PackedIPList{}
func (pil PackedIPList) len() int {
return int(binary.LittleEndian.Uint64(pil[:8]))
}
func (pil PackedIPList) NumRanges() int {
return pil.len()
}
func (pil PackedIPList) getFirst(i int) net.IP {
off := packedRangesOffset + packedRangeLen*i
return net.IP(pil[off : off+16])
}
func (pil PackedIPList) getRange(i int) (ret Range) {
rOff := packedRangesOffset + packedRangeLen*i
last := pil[rOff+16 : rOff+32]
descOff := int(binary.LittleEndian.Uint64(pil[rOff+32:]))
descLen := int(binary.LittleEndian.Uint32(pil[rOff+40:]))
descOff += packedRangesOffset + packedRangeLen*pil.len()
ret = Range{
pil.getFirst(i),
net.IP(last),
string(pil[descOff : descOff+descLen]),
}
return
}
func (pil PackedIPList) Lookup(ip net.IP) (r Range, ok bool) {
ip16 := ip.To16()
if ip16 == nil {
panic(ip)
}
return lookup(pil.getFirst, pil.getRange, pil.len(), ip16)
}
type closerFunc func() error
func (me closerFunc) Close() error {
return me()
}
func MMapPackedFile(filename string) (
ret interface {
Ranger
io.Closer
},
err error,
) {
f, err := os.Open(filename)
if err != nil {
return
}
defer f.Close()
mm, err := mmap.Map(f, mmap.RDONLY, 0)
if err != nil {
return
}
ret = struct {
Ranger
io.Closer
}{NewFromPacked(mm), closerFunc(mm.Unmap)}
return
}

vendor/github.com/anacrolix/torrent/listen.go generated vendored Normal file
@@ -0,0 +1,27 @@
package torrent
import "strings"
type peerNetworks struct {
tcp4, tcp6 bool
utp4, utp6 bool
}
func handleErr(h func(), fs ...func() error) error {
for _, f := range fs {
err := f()
if err != nil {
h()
return err
}
}
return nil
}
func LoopbackListenHost(network string) string {
if strings.Contains(network, "4") {
return "127.0.0.1"
} else {
return "::1"
}
}

vendor/github.com/anacrolix/torrent/logonce/logonce.go generated vendored Normal file
@@ -0,0 +1,47 @@
// Package logonce implements an io.Writer facade that only performs
// distinct writes. It can be used by log.Loggers, since they're
// guaranteed to make a single Write call per message. This is useful for
// logging unexpected conditions that aren't fatal.
package logonce
import (
"io"
"log"
"os"
)
// A default logger similar to the default logger in the log package.
var Stderr *log.Logger
func init() {
// This should emulate the default logger in the log package where
// possible. No time flag, so that repeated messages don't differ only by
// timestamp. File and line information is useful for debugging.
Stderr = log.New(Writer(os.Stderr), "logonce: ", log.Lshortfile)
}
type writer struct {
w io.Writer
writes map[string]struct{}
}
func (w writer) Write(p []byte) (n int, err error) {
s := string(p)
if _, ok := w.writes[s]; ok {
return
}
n, err = w.w.Write(p)
if n != len(s) {
s = string(p[:n])
}
w.writes[s] = struct{}{}
return
}
func Writer(w io.Writer) io.Writer {
return writer{
w: w,
writes: make(map[string]struct{}),
}
}

vendor/github.com/anacrolix/torrent/metainfo/README generated vendored Normal file
@@ -0,0 +1 @@
A library for manipulating ".torrent" files.

@@ -0,0 +1,27 @@
package metainfo
type AnnounceList [][]string
// Whether the AnnounceList should be preferred over a single URL announce.
func (al AnnounceList) OverridesAnnounce(announce string) bool {
for _, tier := range al {
for _, url := range tier {
if url != "" || announce == "" {
return true
}
}
}
return false
}
func (al AnnounceList) DistinctValues() (ret map[string]struct{}) {
for _, tier := range al {
for _, v := range tier {
if ret == nil {
ret = make(map[string]struct{})
}
ret[v] = struct{}{}
}
}
return
}

@@ -0,0 +1,27 @@
package metainfo
import "strings"
// Information specific to a single file inside the MetaInfo structure.
type FileInfo struct {
Length int64 `bencode:"length"`
Path []string `bencode:"path"`
}
func (fi *FileInfo) DisplayPath(info *Info) string {
if info.IsDir() {
return strings.Join(fi.Path, "/")
} else {
return info.Name
}
}
func (me FileInfo) Offset(info *Info) (ret int64) {
for _, fi := range info.UpvertedFiles() {
if me.DisplayPath(info) == fi.DisplayPath(info) {
return
}
ret += fi.Length
}
panic("not found")
}

vendor/github.com/anacrolix/torrent/metainfo/hash.go generated vendored Normal file
@@ -0,0 +1,58 @@
package metainfo
import (
"crypto/sha1"
"encoding/hex"
"fmt"
)
const HashSize = 20
// 20-byte SHA1 hash used for info and pieces.
type Hash [HashSize]byte
func (h Hash) Bytes() []byte {
return h[:]
}
func (h Hash) AsString() string {
return string(h[:])
}
func (h Hash) String() string {
return h.HexString()
}
func (h Hash) HexString() string {
return fmt.Sprintf("%x", h[:])
}
func (h *Hash) FromHexString(s string) (err error) {
if len(s) != 2*HashSize {
err = fmt.Errorf("hash hex string has bad length: %d", len(s))
return
}
n, err := hex.Decode(h[:], []byte(s))
if err != nil {
return
}
if n != HashSize {
panic(n)
}
return
}
func NewHashFromHex(s string) (h Hash) {
err := h.FromHexString(s)
if err != nil {
panic(err)
}
return
}
func HashBytes(b []byte) (ret Hash) {
hasher := sha1.New()
hasher.Write(b)
copy(ret[:], hasher.Sum(nil))
return
}

vendor/github.com/anacrolix/torrent/metainfo/info.go generated vendored Normal file
@@ -0,0 +1,156 @@
package metainfo
import (
"crypto/sha1"
"errors"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"github.com/anacrolix/missinggo/slices"
)
// The info dictionary.
type Info struct {
PieceLength int64 `bencode:"piece length"`
Pieces []byte `bencode:"pieces"`
Name string `bencode:"name"`
Length int64 `bencode:"length,omitempty"`
Private *bool `bencode:"private,omitempty"`
// TODO: Document this field.
Source string `bencode:"source,omitempty"`
Files []FileInfo `bencode:"files,omitempty"`
}
// This is a helper that sets Files and Pieces from a root path and its
// children.
func (info *Info) BuildFromFilePath(root string) (err error) {
info.Name = filepath.Base(root)
info.Files = nil
err = filepath.Walk(root, func(path string, fi os.FileInfo, err error) error {
if err != nil {
return err
}
if fi.IsDir() {
// Directories are implicit in torrent files.
return nil
} else if path == root {
// The root is a file.
info.Length = fi.Size()
return nil
}
relPath, err := filepath.Rel(root, path)
if err != nil {
return fmt.Errorf("error getting relative path: %s", err)
}
info.Files = append(info.Files, FileInfo{
Path: strings.Split(relPath, string(filepath.Separator)),
Length: fi.Size(),
})
return nil
})
if err != nil {
return
}
slices.Sort(info.Files, func(l, r FileInfo) bool {
return strings.Join(l.Path, "/") < strings.Join(r.Path, "/")
})
err = info.GeneratePieces(func(fi FileInfo) (io.ReadCloser, error) {
return os.Open(filepath.Join(root, strings.Join(fi.Path, string(filepath.Separator))))
})
if err != nil {
err = fmt.Errorf("error generating pieces: %s", err)
}
return
}
// Concatenates all the files in the torrent into w. open is a function that
// gets at the contents of the given file.
func (info *Info) writeFiles(w io.Writer, open func(fi FileInfo) (io.ReadCloser, error)) error {
for _, fi := range info.UpvertedFiles() {
r, err := open(fi)
if err != nil {
return fmt.Errorf("error opening %v: %s", fi, err)
}
wn, err := io.CopyN(w, r, fi.Length)
r.Close()
if wn != fi.Length {
return fmt.Errorf("error copying %v: %s", fi, err)
}
}
return nil
}
// Sets Pieces (the block of piece hashes in the Info) by using the passed
// function to get at the torrent data.
func (info *Info) GeneratePieces(open func(fi FileInfo) (io.ReadCloser, error)) error {
if info.PieceLength == 0 {
return errors.New("piece length must be non-zero")
}
pr, pw := io.Pipe()
go func() {
err := info.writeFiles(pw, open)
pw.CloseWithError(err)
}()
defer pr.Close()
var pieces []byte
for {
hasher := sha1.New()
wn, err := io.CopyN(hasher, pr, info.PieceLength)
if err == io.EOF {
err = nil
}
if err != nil {
return err
}
if wn == 0 {
break
}
pieces = hasher.Sum(pieces)
if wn < info.PieceLength {
break
}
}
info.Pieces = pieces
return nil
}
func (info *Info) TotalLength() (ret int64) {
if info.IsDir() {
for _, fi := range info.Files {
ret += fi.Length
}
} else {
ret = info.Length
}
return
}
func (info *Info) NumPieces() int {
return len(info.Pieces) / 20
}
func (info *Info) IsDir() bool {
return len(info.Files) != 0
}
// The files field, converted up from the old single-file in the parent info
// dict if necessary. This is a helper to avoid having to conditionally handle
// single and multi-file torrent infos.
func (info *Info) UpvertedFiles() []FileInfo {
if len(info.Files) == 0 {
return []FileInfo{{
Length: info.Length,
// Callers should determine that Info.Name is the basename, and
// thus a regular file.
Path: nil,
}}
}
return info.Files
}
func (info *Info) Piece(index int) Piece {
return Piece{info, pieceIndex(index)}
}

vendor/github.com/anacrolix/torrent/metainfo/magnet.go generated vendored Normal file
@@ -0,0 +1,77 @@
package metainfo
import (
"encoding/base32"
"encoding/hex"
"fmt"
"net/url"
"strings"
)
// Magnet link components.
type Magnet struct {
InfoHash Hash
Trackers []string
DisplayName string
}
const xtPrefix = "urn:btih:"
func (m Magnet) String() string {
// url.URL likes to assume //, and encodes ':' on us, so we build most of
// this manually.
ret := "magnet:?xt="
ret += xtPrefix + hex.EncodeToString(m.InfoHash[:])
if m.DisplayName != "" {
ret += "&dn=" + url.QueryEscape(m.DisplayName)
}
for _, tr := range m.Trackers {
ret += "&tr=" + url.QueryEscape(tr)
}
return ret
}
// ParseMagnetURI parses Magnet-formatted URIs into a Magnet instance
func ParseMagnetURI(uri string) (m Magnet, err error) {
u, err := url.Parse(uri)
if err != nil {
err = fmt.Errorf("error parsing uri: %s", err)
return
}
if u.Scheme != "magnet" {
err = fmt.Errorf("unexpected scheme: %q", u.Scheme)
return
}
xt := u.Query().Get("xt")
if !strings.HasPrefix(xt, xtPrefix) {
err = fmt.Errorf("bad xt parameter")
return
}
infoHash := xt[len(xtPrefix):]
// The BTIH hash can be in hex or base32 encoding; choose the decoder
// based on the encoded length.
var decode func(dst, src []byte) (int, error)
switch len(infoHash) {
case 40:
decode = hex.Decode
case 32:
decode = base32.StdEncoding.Decode
}
if decode == nil {
err = fmt.Errorf("unhandled xt parameter encoding: encoded length %d", len(infoHash))
return
}
n, err := decode(m.InfoHash[:], []byte(infoHash))
if err != nil {
err = fmt.Errorf("error decoding xt: %s", err)
return
}
if n != 20 {
panic(n)
}
m.DisplayName = u.Query().Get("dn")
m.Trackers = u.Query()["tr"]
return
}

@@ -0,0 +1,87 @@
package metainfo
import (
"io"
"os"
"time"
"github.com/anacrolix/torrent/bencode"
)
type MetaInfo struct {
InfoBytes bencode.Bytes `bencode:"info,omitempty"`
Announce string `bencode:"announce,omitempty"`
AnnounceList AnnounceList `bencode:"announce-list,omitempty"`
Nodes []Node `bencode:"nodes,omitempty"`
CreationDate int64 `bencode:"creation date,omitempty,ignore_unmarshal_type_error"`
Comment string `bencode:"comment,omitempty"`
CreatedBy string `bencode:"created by,omitempty"`
Encoding string `bencode:"encoding,omitempty"`
UrlList UrlList `bencode:"url-list,omitempty"`
}
// Load a MetaInfo from an io.Reader. Returns a non-nil error in case of
// failure.
func Load(r io.Reader) (*MetaInfo, error) {
var mi MetaInfo
d := bencode.NewDecoder(r)
err := d.Decode(&mi)
if err != nil {
return nil, err
}
return &mi, nil
}
// Convenience function for loading a MetaInfo from a file.
func LoadFromFile(filename string) (*MetaInfo, error) {
f, err := os.Open(filename)
if err != nil {
return nil, err
}
defer f.Close()
return Load(f)
}
func (mi MetaInfo) UnmarshalInfo() (info Info, err error) {
err = bencode.Unmarshal(mi.InfoBytes, &info)
return
}
func (mi MetaInfo) HashInfoBytes() (infoHash Hash) {
return HashBytes(mi.InfoBytes)
}
// Encode to bencoded form.
func (mi MetaInfo) Write(w io.Writer) error {
return bencode.NewEncoder(w).Encode(mi)
}
// Set good default values in preparation for creating a new MetaInfo file.
func (mi *MetaInfo) SetDefaults() {
mi.Comment = "yoloham"
mi.CreatedBy = "github.com/anacrolix/torrent"
mi.CreationDate = time.Now().Unix()
// mi.Info.PieceLength = 256 * 1024
}
// Creates a Magnet from a MetaInfo.
func (mi *MetaInfo) Magnet(displayName string, infoHash Hash) (m Magnet) {
for t := range mi.UpvertedAnnounceList().DistinctValues() {
m.Trackers = append(m.Trackers, t)
}
m.DisplayName = displayName
m.InfoHash = infoHash
return
}
// Returns the announce list converted from the old single announce field if
// necessary.
func (mi *MetaInfo) UpvertedAnnounceList() AnnounceList {
if mi.AnnounceList.OverridesAnnounce(mi.Announce) {
return mi.AnnounceList
}
if mi.Announce != "" {
return [][]string{[]string{mi.Announce}}
}
return nil
}

vendor/github.com/anacrolix/torrent/metainfo/nodes.go generated vendored Normal file
@@ -0,0 +1,40 @@
package metainfo
import (
"fmt"
"net"
"strconv"
"github.com/anacrolix/torrent/bencode"
)
type Node string
var (
_ bencode.Unmarshaler = new(Node)
)
func (n *Node) UnmarshalBencode(b []byte) (err error) {
var iface interface{}
err = bencode.Unmarshal(b, &iface)
if err != nil {
return
}
switch v := iface.(type) {
case string:
*n = Node(v)
case []interface{}:
func() {
defer func() {
r := recover()
if r != nil {
err = r.(error)
}
}()
*n = Node(net.JoinHostPort(v[0].(string), strconv.FormatInt(v[1].(int64), 10)))
}()
default:
err = fmt.Errorf("unsupported type: %T", iface)
}
return
}

vendor/github.com/anacrolix/torrent/metainfo/piece.go generated vendored Normal file
@@ -0,0 +1,32 @@
package metainfo
import (
"github.com/anacrolix/missinggo"
)
type Piece struct {
Info *Info
i pieceIndex
}
type pieceIndex = int
func (p Piece) Length() int64 {
if int(p.i) == p.Info.NumPieces()-1 {
return p.Info.TotalLength() - int64(p.i)*p.Info.PieceLength
}
return p.Info.PieceLength
}
func (p Piece) Offset() int64 {
return int64(p.i) * p.Info.PieceLength
}
func (p Piece) Hash() (ret Hash) {
missinggo.CopyExact(&ret, p.Info.Pieces[p.i*HashSize:(p.i+1)*HashSize])
return
}
func (p Piece) Index() pieceIndex {
return p.i
}

@@ -0,0 +1,7 @@
package metainfo
// Uniquely identifies a piece.
type PieceKey struct {
InfoHash Hash
Index pieceIndex
}

@@ -0,0 +1,27 @@
package metainfo
import (
"github.com/anacrolix/torrent/bencode"
)
type UrlList []string
var (
_ bencode.Unmarshaler = (*UrlList)(nil)
)
func (me *UrlList) UnmarshalBencode(b []byte) error {
if len(b) == 0 {
return nil
}
if b[0] == 'l' {
var l []string
err := bencode.Unmarshal(b, &l)
*me = l
return err
}
var s string
err := bencode.Unmarshal(b, &s)
*me = []string{s}
return err
}

vendor/github.com/anacrolix/torrent/misc.go generated vendored Normal file
@@ -0,0 +1,159 @@
package torrent
import (
"errors"
"net"
"github.com/anacrolix/missinggo"
"golang.org/x/time/rate"
"github.com/anacrolix/torrent/metainfo"
pp "github.com/anacrolix/torrent/peer_protocol"
)
type chunkSpec struct {
Begin, Length pp.Integer
}
type request struct {
Index pp.Integer
chunkSpec
}
func (r request) ToMsg(mt pp.MessageType) pp.Message {
return pp.Message{
Type: mt,
Index: r.Index,
Begin: r.Begin,
Length: r.Length,
}
}
func newRequest(index, begin, length pp.Integer) request {
return request{index, chunkSpec{begin, length}}
}
func newRequestFromMessage(msg *pp.Message) request {
switch msg.Type {
case pp.Request, pp.Cancel, pp.Reject:
return newRequest(msg.Index, msg.Begin, msg.Length)
case pp.Piece:
return newRequest(msg.Index, msg.Begin, pp.Integer(len(msg.Piece)))
default:
panic(msg.Type)
}
}
// The size in bytes of a metadata extension piece.
func metadataPieceSize(totalSize int, piece int) int {
ret := totalSize - piece*(1<<14)
if ret > 1<<14 {
ret = 1 << 14
}
return ret
}
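The 16 KiB metadata piece arithmetic can be sketched standalone; the 40000-byte total is an arbitrary example:

```go
package main

import "fmt"

// metadataPieceSize as above: metadata is exchanged in 16 KiB (1<<14)
// pieces, with the last piece truncated to whatever remains.
func metadataPieceSize(totalSize, piece int) int {
	ret := totalSize - piece*(1<<14)
	if ret > 1<<14 {
		ret = 1 << 14
	}
	return ret
}

func main() {
	// 40000 bytes of metadata: pieces 0 and 1 are full, piece 2 holds
	// the remaining 40000 - 2*16384 = 7232 bytes.
	fmt.Println(metadataPieceSize(40000, 0), metadataPieceSize(40000, 2)) // 16384 7232
}
```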
// Return the request that would include the given offset into the torrent data.
func torrentOffsetRequest(torrentLength, pieceSize, chunkSize, offset int64) (
r request, ok bool) {
if offset < 0 || offset >= torrentLength {
return
}
r.Index = pp.Integer(offset / pieceSize)
r.Begin = pp.Integer(offset % pieceSize / chunkSize * chunkSize)
r.Length = pp.Integer(chunkSize)
pieceLeft := pp.Integer(pieceSize - int64(r.Begin))
if r.Length > pieceLeft {
r.Length = pieceLeft
}
torrentLeft := torrentLength - int64(r.Index)*pieceSize - int64(r.Begin)
if int64(r.Length) > torrentLeft {
r.Length = pp.Integer(torrentLeft)
}
ok = true
return
}
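A standalone sketch of the offset-to-request mapping above, with plain int64s instead of pp.Integer; the torrent, piece, and chunk sizes are made-up examples:

```go
package main

import "fmt"

// offsetRequest mirrors torrentOffsetRequest above: map a byte offset
// into the (piece index, chunk begin, chunk length) that would be
// requested, clamping the length to the piece and torrent boundaries.
func offsetRequest(torrentLength, pieceSize, chunkSize, offset int64) (index, begin, length int64, ok bool) {
	if offset < 0 || offset >= torrentLength {
		return
	}
	index = offset / pieceSize
	begin = offset % pieceSize / chunkSize * chunkSize
	length = chunkSize
	if left := pieceSize - begin; length > left {
		length = left
	}
	if left := torrentLength - index*pieceSize - begin; length > left {
		length = left
	}
	ok = true
	return
}

func main() {
	// 100-byte torrent, 64-byte pieces, 16-byte chunks: offset 96 lands
	// in piece 1 at chunk begin 32, with only 4 bytes of torrent left.
	fmt.Println(offsetRequest(100, 64, 16, 96)) // 1 32 4 true
}
```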
func torrentRequestOffset(torrentLength, pieceSize int64, r request) (off int64) {
off = int64(r.Index)*pieceSize + int64(r.Begin)
if off < 0 || off >= torrentLength {
panic("invalid request")
}
return
}
func validateInfo(info *metainfo.Info) error {
if len(info.Pieces)%20 != 0 {
return errors.New("pieces has invalid length")
}
if info.PieceLength == 0 {
if info.TotalLength() != 0 {
return errors.New("zero piece length")
}
} else {
if int((info.TotalLength()+info.PieceLength-1)/info.PieceLength) != info.NumPieces() {
return errors.New("piece count and file lengths are at odds")
}
}
return nil
}
func chunkIndexSpec(index pp.Integer, pieceLength, chunkSize pp.Integer) chunkSpec {
ret := chunkSpec{pp.Integer(index) * chunkSize, chunkSize}
if ret.Begin+ret.Length > pieceLength {
ret.Length = pieceLength - ret.Begin
}
return ret
}
func connLessTrusted(l, r *connection) bool {
return l.netGoodPiecesDirtied() < r.netGoodPiecesDirtied()
}
func connIsIpv6(nc interface {
LocalAddr() net.Addr
}) bool {
ra := nc.LocalAddr()
rip := missinggo.AddrIP(ra)
return rip.To4() == nil && rip.To16() != nil
}
func clamp(min, value, max int64) int64 {
if min > max {
panic("harumph")
}
if value < min {
value = min
}
if value > max {
value = max
}
return value
}
func max(as ...int64) int64 {
ret := as[0]
for _, a := range as[1:] {
if a > ret {
ret = a
}
}
return ret
}
func min(as ...int64) int64 {
ret := as[0]
for _, a := range as[1:] {
if a < ret {
ret = a
}
}
return ret
}
var unlimited = rate.NewLimiter(rate.Inf, 0)
type (
pieceIndex = int
InfoHash = metainfo.Hash
)

@@ -0,0 +1,82 @@
package mmap_span
import (
"io"
"log"
"sync"
"github.com/edsrzf/mmap-go"
)
type segment struct {
*mmap.MMap
}
func (s segment) Size() int64 {
return int64(len(*s.MMap))
}
type MMapSpan struct {
mu sync.RWMutex
span
}
func (ms *MMapSpan) Append(mmap mmap.MMap) {
ms.span = append(ms.span, segment{&mmap})
}
func (ms *MMapSpan) Close() error {
ms.mu.Lock()
defer ms.mu.Unlock()
for _, mMap := range ms.span {
err := mMap.(segment).Unmap()
if err != nil {
log.Print(err)
}
}
return nil
}
func (ms *MMapSpan) Size() (ret int64) {
ms.mu.RLock()
defer ms.mu.RUnlock()
for _, seg := range ms.span {
ret += seg.Size()
}
return
}
func (ms *MMapSpan) ReadAt(p []byte, off int64) (n int, err error) {
ms.mu.RLock()
defer ms.mu.RUnlock()
ms.ApplyTo(off, func(intervalOffset int64, interval sizer) (stop bool) {
_n := copy(p, (*interval.(segment).MMap)[intervalOffset:])
p = p[_n:]
n += _n
return len(p) == 0
})
if len(p) != 0 {
err = io.EOF
}
return
}
func (ms *MMapSpan) WriteAt(p []byte, off int64) (n int, err error) {
ms.mu.RLock()
defer ms.mu.RUnlock()
ms.ApplyTo(off, func(iOff int64, i sizer) (stop bool) {
mMap := i.(segment)
_n := copy((*mMap.MMap)[iOff:], p)
// err = mMap.Sync(gommap.MS_ASYNC)
// if err != nil {
// return true
// }
p = p[_n:]
n += _n
return len(p) == 0
})
if err != nil && len(p) != 0 {
err = io.ErrShortWrite
}
return
}

vendor/github.com/anacrolix/torrent/mmap_span/span.go generated vendored Normal file
@@ -0,0 +1,21 @@
package mmap_span
type sizer interface {
Size() int64
}
type span []sizer
func (s span) ApplyTo(off int64, f func(int64, sizer) (stop bool)) {
for _, interval := range s {
iSize := interval.Size()
if off >= iSize {
off -= iSize
} else {
if f(off, interval) {
return
}
off = 0
}
}
}

vendor/github.com/anacrolix/torrent/mse/mse.go generated vendored Normal file

@@ -0,0 +1,566 @@
// https://wiki.vuze.com/w/Message_Stream_Encryption
package mse
import (
"bytes"
"crypto/rand"
"crypto/rc4"
"crypto/sha1"
"encoding/binary"
"errors"
"expvar"
"fmt"
"io"
"io/ioutil"
"math"
"math/big"
"strconv"
"sync"
"github.com/anacrolix/missinggo/perf"
"github.com/bradfitz/iter"
)
const (
maxPadLen = 512
CryptoMethodPlaintext CryptoMethod = 1
CryptoMethodRC4 CryptoMethod = 2
AllSupportedCrypto = CryptoMethodPlaintext | CryptoMethodRC4
)
type CryptoMethod uint32
var (
// Prime P according to the spec, and G, the generator.
p, g big.Int
// The rand.Int max arg for use in newPadLen()
newPadLenMax big.Int
// For use in initer's hashes
req1 = []byte("req1")
req2 = []byte("req2")
req3 = []byte("req3")
// Verification constant "VC" which is all zeroes in the bittorrent
// implementation.
vc [8]byte
// Zero padding
zeroPad [512]byte
// Tracks counts of received crypto_provides
cryptoProvidesCount = expvar.NewMap("mseCryptoProvides")
)
func init() {
p.SetString("0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A63A36210000000000090563", 0)
g.SetInt64(2)
newPadLenMax.SetInt64(maxPadLen + 1)
}
func hash(parts ...[]byte) []byte {
h := sha1.New()
for _, p := range parts {
n, err := h.Write(p)
if err != nil {
panic(err)
}
if n != len(p) {
panic(n)
}
}
return h.Sum(nil)
}
func newEncrypt(initer bool, s []byte, skey []byte) (c *rc4.Cipher) {
c, err := rc4.NewCipher(hash([]byte(func() string {
if initer {
return "keyA"
} else {
return "keyB"
}
}()), s, skey))
if err != nil {
panic(err)
}
var burnSrc, burnDst [1024]byte
c.XORKeyStream(burnDst[:], burnSrc[:])
return
}
type cipherReader struct {
c *rc4.Cipher
r io.Reader
mu sync.Mutex
be []byte
}
func (cr *cipherReader) Read(b []byte) (n int, err error) {
var be []byte
cr.mu.Lock()
if len(cr.be) >= len(b) {
be = cr.be
cr.be = nil
cr.mu.Unlock()
} else {
cr.mu.Unlock()
be = make([]byte, len(b))
}
n, err = cr.r.Read(be[:len(b)])
cr.c.XORKeyStream(b[:n], be[:n])
cr.mu.Lock()
if len(be) > len(cr.be) {
cr.be = be
}
cr.mu.Unlock()
return
}
func newCipherReader(c *rc4.Cipher, r io.Reader) io.Reader {
return &cipherReader{c: c, r: r}
}
type cipherWriter struct {
c *rc4.Cipher
w io.Writer
b []byte
}
func (cr *cipherWriter) Write(b []byte) (n int, err error) {
be := func() []byte {
if len(cr.b) < len(b) {
return make([]byte, len(b))
} else {
ret := cr.b
cr.b = nil
return ret
}
}()
cr.c.XORKeyStream(be[:], b)
n, err = cr.w.Write(be[:len(b)])
if n != len(b) {
// The cipher will have advanced beyond the caller's stream position.
// We can't use the cipher anymore.
cr.c = nil
}
if len(be) > len(cr.b) {
cr.b = be
}
return
}
func newX() big.Int {
var X big.Int
X.SetBytes(func() []byte {
var b [20]byte
_, err := rand.Read(b[:])
if err != nil {
panic(err)
}
return b[:]
}())
return X
}
func paddedLeft(b []byte, _len int) []byte {
if len(b) == _len {
return b
}
ret := make([]byte, _len)
if n := copy(ret[_len-len(b):], b); n != len(b) {
panic(n)
}
return ret
}
// Calculate, and send Y, our public key.
func (h *handshake) postY(x *big.Int) error {
var y big.Int
y.Exp(&g, x, &p)
return h.postWrite(paddedLeft(y.Bytes(), 96))
}
func (h *handshake) establishS() error {
x := newX()
if err := h.postY(&x); err != nil {
return err
}
var b [96]byte
_, err := io.ReadFull(h.conn, b[:])
if err != nil {
return fmt.Errorf("error reading Y: %s", err)
}
var Y, S big.Int
Y.SetBytes(b[:])
S.Exp(&Y, &x, &p)
sBytes := S.Bytes()
copy(h.s[96-len(sBytes):96], sBytes)
return nil
}
func newPadLen() int64 {
i, err := rand.Int(rand.Reader, &newPadLenMax)
if err != nil {
panic(err)
}
ret := i.Int64()
if ret < 0 || ret > maxPadLen {
panic(ret)
}
return ret
}
// Manages state for both initiating and receiving handshakes.
type handshake struct {
conn io.ReadWriter
s [96]byte
initer bool // Whether we're initiating or receiving.
skeys SecretKeyIter // Skeys we'll accept if receiving.
skey []byte // Skey we're initiating with.
ia []byte // Initial payload. Only used by the initiator.
// Return the bit for the crypto method the receiver wants to use.
chooseMethod CryptoSelector
// Sent to the receiver.
cryptoProvides CryptoMethod
writeMu sync.Mutex
writes [][]byte
writeErr error
writeCond sync.Cond
writeClose bool
writerMu sync.Mutex
writerCond sync.Cond
writerDone bool
}
func (h *handshake) finishWriting() {
h.writeMu.Lock()
h.writeClose = true
h.writeCond.Broadcast()
h.writeMu.Unlock()
h.writerMu.Lock()
for !h.writerDone {
h.writerCond.Wait()
}
h.writerMu.Unlock()
}
func (h *handshake) writer() {
defer func() {
h.writerMu.Lock()
h.writerDone = true
h.writerCond.Broadcast()
h.writerMu.Unlock()
}()
for {
h.writeMu.Lock()
for {
if len(h.writes) != 0 {
break
}
if h.writeClose {
h.writeMu.Unlock()
return
}
h.writeCond.Wait()
}
b := h.writes[0]
h.writes = h.writes[1:]
h.writeMu.Unlock()
_, err := h.conn.Write(b)
if err != nil {
h.writeMu.Lock()
h.writeErr = err
h.writeMu.Unlock()
return
}
}
}
func (h *handshake) postWrite(b []byte) error {
h.writeMu.Lock()
defer h.writeMu.Unlock()
if h.writeErr != nil {
return h.writeErr
}
h.writes = append(h.writes, b)
h.writeCond.Signal()
return nil
}
func xor(dst, src []byte) (ret []byte) {
max := len(dst)
if max > len(src) {
max = len(src)
}
ret = make([]byte, 0, max)
for i := range iter.N(max) {
ret = append(ret, dst[i]^src[i])
}
return
}
func marshal(w io.Writer, data ...interface{}) (err error) {
for _, data := range data {
err = binary.Write(w, binary.BigEndian, data)
if err != nil {
break
}
}
return
}
func unmarshal(r io.Reader, data ...interface{}) (err error) {
for _, data := range data {
err = binary.Read(r, binary.BigEndian, data)
if err != nil {
break
}
}
return
}
// Looking for b at the end of a.
func suffixMatchLen(a, b []byte) int {
if len(b) > len(a) {
b = b[:len(a)]
}
// i is how much of b to try to match
for i := len(b); i > 0; i-- {
// j is how many chars we've compared
j := 0
for ; j < i; j++ {
if b[i-1-j] != a[len(a)-1-j] {
goto shorter
}
}
return j
shorter:
}
return 0
}
// Reads from r until b has been seen. Keeps the minimum amount of data in
// memory.
func readUntil(r io.Reader, b []byte) error {
b1 := make([]byte, len(b))
i := 0
for {
_, err := io.ReadFull(r, b1[i:])
if err != nil {
return err
}
i = suffixMatchLen(b1, b)
if i == len(b) {
break
}
if copy(b1, b1[len(b1)-i:]) != i {
panic("wat")
}
}
return nil
}
type readWriter struct {
io.Reader
io.Writer
}
func (h *handshake) newEncrypt(initer bool) *rc4.Cipher {
return newEncrypt(initer, h.s[:], h.skey)
}
func (h *handshake) initerSteps() (ret io.ReadWriter, selected CryptoMethod, err error) {
h.postWrite(hash(req1, h.s[:]))
h.postWrite(xor(hash(req2, h.skey), hash(req3, h.s[:])))
buf := &bytes.Buffer{}
padLen := uint16(newPadLen())
if len(h.ia) > math.MaxUint16 {
err = errors.New("initial payload too large")
return
}
err = marshal(buf, vc[:], h.cryptoProvides, padLen, zeroPad[:padLen], uint16(len(h.ia)), h.ia)
if err != nil {
return
}
e := h.newEncrypt(true)
be := make([]byte, buf.Len())
e.XORKeyStream(be, buf.Bytes())
h.postWrite(be)
bC := h.newEncrypt(false)
var eVC [8]byte
bC.XORKeyStream(eVC[:], vc[:])
// Read until the all-zero VC. At this point we've only read the 96-byte
// public key, Y. There is potentially 512 bytes of padding between us
// and the 8-byte verification constant.
err = readUntil(io.LimitReader(h.conn, 520), eVC[:])
if err != nil {
if err == io.EOF {
err = errors.New("failed to synchronize on VC")
} else {
err = fmt.Errorf("error reading until VC: %s", err)
}
return
}
r := newCipherReader(bC, h.conn)
var method CryptoMethod
err = unmarshal(r, &method, &padLen)
if err != nil {
return
}
_, err = io.CopyN(ioutil.Discard, r, int64(padLen))
if err != nil {
return
}
selected = method & h.cryptoProvides
switch selected {
case CryptoMethodRC4:
ret = readWriter{r, &cipherWriter{e, h.conn, nil}}
case CryptoMethodPlaintext:
ret = h.conn
default:
err = fmt.Errorf("receiver chose unsupported method: %x", method)
}
return
}
var ErrNoSecretKeyMatch = errors.New("no skey matched")
func (h *handshake) receiverSteps() (ret io.ReadWriter, chosen CryptoMethod, err error) {
// There is up to 512 bytes of padding, then the 20 byte hash.
err = readUntil(io.LimitReader(h.conn, 532), hash(req1, h.s[:]))
if err != nil {
if err == io.EOF {
err = errors.New("failed to synchronize on S hash")
}
return
}
var b [20]byte
_, err = io.ReadFull(h.conn, b[:])
if err != nil {
return
}
err = ErrNoSecretKeyMatch
h.skeys(func(skey []byte) bool {
if bytes.Equal(xor(hash(req2, skey), hash(req3, h.s[:])), b[:]) {
h.skey = skey
err = nil
return false
}
return true
})
if err != nil {
return
}
r := newCipherReader(newEncrypt(true, h.s[:], h.skey), h.conn)
var (
vc [8]byte
provides CryptoMethod
padLen uint16
)
err = unmarshal(r, vc[:], &provides, &padLen)
if err != nil {
return
}
cryptoProvidesCount.Add(strconv.FormatUint(uint64(provides), 16), 1)
chosen = h.chooseMethod(provides)
_, err = io.CopyN(ioutil.Discard, r, int64(padLen))
if err != nil {
return
}
var lenIA uint16
if err = unmarshal(r, &lenIA); err != nil {
return
}
if lenIA != 0 {
h.ia = make([]byte, lenIA)
if err = unmarshal(r, h.ia); err != nil {
return
}
}
buf := &bytes.Buffer{}
w := cipherWriter{h.newEncrypt(false), buf, nil}
padLen = uint16(newPadLen())
err = marshal(&w, &vc, uint32(chosen), padLen, zeroPad[:padLen])
if err != nil {
return
}
err = h.postWrite(buf.Bytes())
if err != nil {
return
}
switch chosen {
case CryptoMethodRC4:
ret = readWriter{
io.MultiReader(bytes.NewReader(h.ia), r),
&cipherWriter{w.c, h.conn, nil},
}
case CryptoMethodPlaintext:
ret = readWriter{
io.MultiReader(bytes.NewReader(h.ia), h.conn),
h.conn,
}
default:
err = errors.New("chosen crypto method is not supported")
}
return
}
func (h *handshake) Do() (ret io.ReadWriter, method CryptoMethod, err error) {
h.writeCond.L = &h.writeMu
h.writerCond.L = &h.writerMu
go h.writer()
defer func() {
h.finishWriting()
if err == nil {
err = h.writeErr
}
}()
err = h.establishS()
if err != nil {
err = fmt.Errorf("error while establishing secret: %s", err)
return
}
pad := make([]byte, newPadLen())
if _, err = io.ReadFull(rand.Reader, pad); err != nil {
return
}
err = h.postWrite(pad)
if err != nil {
return
}
if h.initer {
ret, method, err = h.initerSteps()
} else {
ret, method, err = h.receiverSteps()
}
return
}
func InitiateHandshake(rw io.ReadWriter, skey []byte, initialPayload []byte, cryptoProvides CryptoMethod) (ret io.ReadWriter, method CryptoMethod, err error) {
h := handshake{
conn: rw,
initer: true,
skey: skey,
ia: initialPayload,
cryptoProvides: cryptoProvides,
}
defer perf.ScopeTimerErr(&err)()
return h.Do()
}
func ReceiveHandshake(rw io.ReadWriter, skeys SecretKeyIter, selectCrypto CryptoSelector) (ret io.ReadWriter, method CryptoMethod, err error) {
h := handshake{
conn: rw,
initer: false,
skeys: skeys,
chooseMethod: selectCrypto,
}
return h.Do()
}
// A function that, given a callback, calls it with secret keys until the
// callback returns false or the keys are exhausted.
type SecretKeyIter func(callback func(skey []byte) (more bool))
func DefaultCryptoSelector(provided CryptoMethod) CryptoMethod {
if provided&CryptoMethodPlaintext != 0 {
return CryptoMethodPlaintext
}
return CryptoMethodRC4
}
type CryptoSelector func(CryptoMethod) CryptoMethod

vendor/github.com/anacrolix/torrent/multiless.go generated vendored Normal file

@@ -0,0 +1,47 @@
package torrent
func strictCmp(same, less bool) cmper {
return func() (bool, bool) { return same, less }
}
type (
cmper func() (same, less bool)
multiLess struct {
ok bool
less bool
}
)
func (me *multiLess) Final() bool {
if !me.ok {
panic("undetermined")
}
return me.less
}
func (me *multiLess) FinalOk() (left, ok bool) {
return me.less, me.ok
}
func (me *multiLess) Next(f cmper) {
if me.ok {
return
}
same, less := f()
if same {
return
}
me.ok = true
me.less = less
}
func (me *multiLess) StrictNext(same, less bool) {
if me.ok {
return
}
me.Next(func() (bool, bool) { return same, less })
}
func (me *multiLess) NextBool(l, r bool) {
me.StrictNext(l == r, l)
}


@@ -0,0 +1,22 @@
package peer_protocol
import (
"net"
"github.com/anacrolix/torrent/bencode"
)
// Marshals to the smallest compact byte representation.
type CompactIp net.IP
var _ bencode.Marshaler = CompactIp{}
func (me CompactIp) MarshalBencode() ([]byte, error) {
return bencode.Marshal(func() []byte {
if ip4 := net.IP(me).To4(); ip4 != nil {
return ip4
} else {
return me
}
}())
}


@@ -0,0 +1,124 @@
package peer_protocol
import (
"bufio"
"encoding/binary"
"fmt"
"io"
"io/ioutil"
"sync"
"github.com/pkg/errors"
)
type Decoder struct {
R *bufio.Reader
Pool *sync.Pool
MaxLength Integer // TODO: Should this include the length header or not?
}
// io.EOF is returned if the source terminates cleanly on a message boundary.
// TODO: Is that before or after the message?
func (d *Decoder) Decode(msg *Message) (err error) {
var length Integer
err = binary.Read(d.R, binary.BigEndian, &length)
if err != nil {
if err != io.EOF {
err = fmt.Errorf("error reading message length: %s", err)
}
return
}
if length > d.MaxLength {
return errors.New("message too long")
}
if length == 0 {
msg.Keepalive = true
return
}
msg.Keepalive = false
r := &io.LimitedReader{R: d.R, N: int64(length)}
// Check that all of r was utilized.
defer func() {
if err != nil {
return
}
if r.N != 0 {
err = fmt.Errorf("%d bytes unused in message type %d", r.N, msg.Type)
}
}()
msg.Keepalive = false
c, err := readByte(r)
if err != nil {
return
}
msg.Type = MessageType(c)
switch msg.Type {
case Choke, Unchoke, Interested, NotInterested, HaveAll, HaveNone:
return
case Have, AllowedFast, Suggest:
err = msg.Index.Read(r)
case Request, Cancel, Reject:
for _, data := range []*Integer{&msg.Index, &msg.Begin, &msg.Length} {
err = data.Read(r)
if err != nil {
break
}
}
case Bitfield:
b := make([]byte, length-1)
_, err = io.ReadFull(r, b)
msg.Bitfield = unmarshalBitfield(b)
case Piece:
for _, pi := range []*Integer{&msg.Index, &msg.Begin} {
err := pi.Read(r)
if err != nil {
return err
}
}
dataLen := r.N
msg.Piece = (*d.Pool.Get().(*[]byte))
if int64(cap(msg.Piece)) < dataLen {
return errors.New("piece data longer than expected")
}
msg.Piece = msg.Piece[:dataLen]
_, err := io.ReadFull(r, msg.Piece)
if err != nil {
return errors.Wrap(err, "reading piece data")
}
case Extended:
// Assign to the named return so a read error isn't lost to shadowing.
var b byte
b, err = readByte(r)
if err != nil {
break
}
msg.ExtendedID = ExtensionNumber(b)
msg.ExtendedPayload, err = ioutil.ReadAll(r)
case Port:
err = binary.Read(r, binary.BigEndian, &msg.Port)
default:
err = fmt.Errorf("unknown message type %#v", c)
}
return
}
func readByte(r io.Reader) (b byte, err error) {
var arr [1]byte
n, err := r.Read(arr[:])
b = arr[0]
if n == 1 {
err = nil
return
}
if err == nil {
panic("zero-length read with nil error")
}
return
}
func unmarshalBitfield(b []byte) (bf []bool) {
for _, c := range b {
for i := 7; i >= 0; i-- {
bf = append(bf, (c>>uint(i))&1 == 1)
}
}
return
}


@@ -0,0 +1,32 @@
package peer_protocol
import "net"
// http://www.bittorrent.org/beps/bep_0010.html
type (
ExtendedHandshakeMessage struct {
M map[ExtensionName]ExtensionNumber `bencode:"m"`
V string `bencode:"v,omitempty"`
Reqq int `bencode:"reqq,omitempty"`
Encryption bool `bencode:"e,omitempty"`
// BEP 9
MetadataSize int `bencode:"metadata_size,omitempty"`
// The local client port. It would be redundant for the receiving side of
// a connection to send this.
Port int `bencode:"p,omitempty"`
YourIp CompactIp `bencode:"yourip,omitempty"`
Ipv4 CompactIp `bencode:"ipv4,omitempty"`
Ipv6 net.IP `bencode:"ipv6,omitempty"`
}
ExtensionName string
ExtensionNumber int
)
const (
// http://www.bittorrent.org/beps/bep_0011.html
ExtensionNamePex ExtensionName = "ut_pex"
// http://bittorrent.org/beps/bep_0009.html. Note that there's an
// LT_metadata, but I've never implemented it.
ExtensionNameMetadata = "ut_metadata"
)


@@ -0,0 +1,138 @@
package peer_protocol
import (
"encoding/hex"
"fmt"
"io"
"github.com/anacrolix/missinggo"
"github.com/anacrolix/torrent/metainfo"
)
type ExtensionBit uint
const (
ExtensionBitDHT = 0 // http://www.bittorrent.org/beps/bep_0005.html
ExtensionBitExtended = 20 // http://www.bittorrent.org/beps/bep_0010.html
ExtensionBitFast = 2 // http://www.bittorrent.org/beps/bep_0006.html
)
func handshakeWriter(w io.Writer, bb <-chan []byte, done chan<- error) {
var err error
for b := range bb {
_, err = w.Write(b)
if err != nil {
break
}
}
done <- err
}
type (
PeerExtensionBits [8]byte
)
func (me PeerExtensionBits) String() string {
return hex.EncodeToString(me[:])
}
func NewPeerExtensionBytes(bits ...ExtensionBit) (ret PeerExtensionBits) {
for _, b := range bits {
ret.SetBit(b)
}
return
}
func (pex PeerExtensionBits) SupportsExtended() bool {
return pex.GetBit(ExtensionBitExtended)
}
func (pex PeerExtensionBits) SupportsDHT() bool {
return pex.GetBit(ExtensionBitDHT)
}
func (pex PeerExtensionBits) SupportsFast() bool {
return pex.GetBit(ExtensionBitFast)
}
func (pex *PeerExtensionBits) SetBit(bit ExtensionBit) {
pex[7-bit/8] |= 1 << (bit % 8)
}
func (pex PeerExtensionBits) GetBit(bit ExtensionBit) bool {
return pex[7-bit/8]&(1<<(bit%8)) != 0
}
type HandshakeResult struct {
PeerExtensionBits
PeerID [20]byte
metainfo.Hash
}
// ih is nil if we expect the peer to declare the InfoHash, such as when the
// peer initiated the connection. Returns ok if the Handshake was successful,
// and err if there was an unexpected condition other than the peer simply
// abandoning the Handshake.
func Handshake(sock io.ReadWriter, ih *metainfo.Hash, peerID [20]byte, extensions PeerExtensionBits) (res HandshakeResult, ok bool, err error) {
// Bytes to be sent to the peer. Should never block the sender.
postCh := make(chan []byte, 4)
// A single error value sent when the writer completes.
writeDone := make(chan error, 1)
// Performs writes to the socket and ensures posts don't block.
go handshakeWriter(sock, postCh, writeDone)
defer func() {
close(postCh) // Done writing.
if !ok {
return
}
if err != nil {
panic(err)
}
// Wait until writes complete before returning from handshake.
err = <-writeDone
if err != nil {
err = fmt.Errorf("error writing: %s", err)
}
}()
post := func(bb []byte) {
select {
case postCh <- bb:
default:
panic("mustn't block while posting")
}
}
post([]byte(Protocol))
post(extensions[:])
if ih != nil { // We already know what we want.
post(ih[:])
post(peerID[:])
}
var b [68]byte
_, err = io.ReadFull(sock, b[:68])
if err != nil {
err = nil
return
}
if string(b[:20]) != Protocol {
return
}
missinggo.CopyExact(&res.PeerExtensionBits, b[20:28])
missinggo.CopyExact(&res.Hash, b[28:48])
missinggo.CopyExact(&res.PeerID, b[48:68])
// peerExtensions.Add(res.PeerExtensionBits.String(), 1)
// TODO: Maybe we can just drop peers here if we're not interested. This
// could prevent them trying to reconnect, falsely believing there was
// just a problem.
if ih == nil { // We were waiting for the peer to tell us what they wanted.
post(res.Hash[:])
post(peerID[:])
}
ok = true
return
}


@@ -0,0 +1,25 @@
package peer_protocol
import (
"encoding/binary"
"io"
)
type Integer uint32
func (i *Integer) Read(r io.Reader) error {
return binary.Read(r, binary.BigEndian, i)
}
// It's perfectly fine to cast these to an int. TODO: Or is it?
func (i Integer) Int() int {
return int(i)
}
func (i Integer) Uint64() uint64 {
return uint64(i)
}
func (i Integer) Uint32() uint32 {
return uint32(i)
}


@@ -0,0 +1,30 @@
// Code generated by "stringer -type=MessageType"; DO NOT EDIT.
package peer_protocol
import "strconv"
const (
_MessageType_name_0 = "ChokeUnchokeInterestedNotInterestedHaveBitfieldRequestPieceCancelPort"
_MessageType_name_1 = "SuggestHaveAllHaveNoneRejectAllowedFast"
_MessageType_name_2 = "Extended"
)
var (
_MessageType_index_0 = [...]uint8{0, 5, 12, 22, 35, 39, 47, 54, 59, 65, 69}
_MessageType_index_1 = [...]uint8{0, 7, 14, 22, 28, 39}
)
func (i MessageType) String() string {
switch {
case 0 <= i && i <= 9:
return _MessageType_name_0[_MessageType_index_0[i]:_MessageType_index_0[i+1]]
case 13 <= i && i <= 17:
i -= 13
return _MessageType_name_1[_MessageType_index_1[i]:_MessageType_index_1[i+1]]
case i == 20:
return _MessageType_name_2
default:
return "MessageType(" + strconv.FormatInt(int64(i), 10) + ")"
}
}


@@ -0,0 +1,116 @@
package peer_protocol
import (
"bytes"
"encoding/binary"
"fmt"
)
type Message struct {
Keepalive bool
Type MessageType
Index, Begin, Length Integer
Piece []byte
Bitfield []bool
ExtendedID ExtensionNumber
ExtendedPayload []byte
Port uint16
}
func MakeCancelMessage(piece, offset, length Integer) Message {
return Message{
Type: Cancel,
Index: piece,
Begin: offset,
Length: length,
}
}
func (msg Message) RequestSpec() (ret RequestSpec) {
return RequestSpec{
msg.Index,
msg.Begin,
func() Integer {
if msg.Type == Piece {
return Integer(len(msg.Piece))
} else {
return msg.Length
}
}(),
}
}
func (msg Message) MustMarshalBinary() []byte {
b, err := msg.MarshalBinary()
if err != nil {
panic(err)
}
return b
}
func (msg Message) MarshalBinary() (data []byte, err error) {
buf := &bytes.Buffer{}
if !msg.Keepalive {
err = buf.WriteByte(byte(msg.Type))
if err != nil {
return
}
switch msg.Type {
case Choke, Unchoke, Interested, NotInterested, HaveAll, HaveNone:
case Have:
err = binary.Write(buf, binary.BigEndian, msg.Index)
case Request, Cancel, Reject:
for _, i := range []Integer{msg.Index, msg.Begin, msg.Length} {
err = binary.Write(buf, binary.BigEndian, i)
if err != nil {
break
}
}
case Bitfield:
_, err = buf.Write(marshalBitfield(msg.Bitfield))
case Piece:
for _, i := range []Integer{msg.Index, msg.Begin} {
err = binary.Write(buf, binary.BigEndian, i)
if err != nil {
return
}
}
// Assign to the named return rather than shadowing err in this case.
var n int
n, err = buf.Write(msg.Piece)
if err != nil {
break
}
if n != len(msg.Piece) {
panic(n)
}
case Extended:
err = buf.WriteByte(byte(msg.ExtendedID))
if err != nil {
return
}
_, err = buf.Write(msg.ExtendedPayload)
case Port:
err = binary.Write(buf, binary.BigEndian, msg.Port)
default:
err = fmt.Errorf("unknown message type: %v", msg.Type)
}
}
data = make([]byte, 4+buf.Len())
binary.BigEndian.PutUint32(data, uint32(buf.Len()))
if buf.Len() != copy(data[4:], buf.Bytes()) {
panic("bad copy")
}
return
}
func marshalBitfield(bf []bool) (b []byte) {
b = make([]byte, (len(bf)+7)/8)
for i, have := range bf {
if !have {
continue
}
c := b[i/8]
c |= 1 << uint(7-i%8)
b[i/8] = c
}
return
}


@@ -0,0 +1,26 @@
package peer_protocol
import "github.com/anacrolix/dht/krpc"
type PexMsg struct {
Added krpc.CompactIPv4NodeAddrs `bencode:"added"`
AddedFlags []PexPeerFlags `bencode:"added.f"`
Added6 krpc.CompactIPv6NodeAddrs `bencode:"added6"`
Added6Flags []PexPeerFlags `bencode:"added6.f"`
Dropped krpc.CompactIPv4NodeAddrs `bencode:"dropped"`
Dropped6 krpc.CompactIPv6NodeAddrs `bencode:"dropped6"`
}
type PexPeerFlags byte
func (me PexPeerFlags) Get(f PexPeerFlags) bool {
return me&f == f
}
const (
PexPrefersEncryption = 0x01
PexSeedUploadOnly = 0x02
PexSupportsUtp = 0x04
PexHolepunchSupport = 0x08
PexOutgoingConn = 0x10
)


@@ -0,0 +1,45 @@
package peer_protocol
const (
Protocol = "\x13BitTorrent protocol"
)
type MessageType byte
//go:generate stringer -type=MessageType
func (mt MessageType) FastExtension() bool {
return mt >= Suggest && mt <= AllowedFast
}
const (
// BEP 3
Choke MessageType = 0
Unchoke MessageType = 1
Interested MessageType = 2
NotInterested MessageType = 3
Have MessageType = 4
Bitfield MessageType = 5
Request MessageType = 6
Piece MessageType = 7
Cancel MessageType = 8
Port MessageType = 9
// BEP 6 - Fast extension
Suggest MessageType = 0x0d // 13
HaveAll MessageType = 0x0e // 14
HaveNone MessageType = 0x0f // 15
Reject MessageType = 0x10 // 16
AllowedFast MessageType = 0x11 // 17
// BEP 10
Extended MessageType = 20
)
const (
HandshakeExtendedID = 0
RequestMetadataExtensionMsgType = 0
DataMetadataExtensionMsgType = 1
RejectMetadataExtensionMsgType = 2
)


@@ -0,0 +1,11 @@
package peer_protocol
import "fmt"
type RequestSpec struct {
Index, Begin, Length Integer
}
func (me RequestSpec) String() string {
return fmt.Sprintf("{%d %d %d}", me.Index, me.Begin, me.Length)
}

vendor/github.com/anacrolix/torrent/peerid.go generated vendored Normal file

@@ -0,0 +1,14 @@
package torrent
// Peer client ID.
type PeerID [20]byte
// // Pretty prints the ID as hex, except parts that adhere to the Peer ID
// // Conventions of BEP 20.
// func (me PeerID) String() string {
// // if me[0] == '-' && me[7] == '-' {
// // return string(me[:8]) + hex.EncodeToString(me[8:])
// // }
// // return hex.EncodeToString(me[:])
// return fmt.Sprintf("%+q", me[:])
// }

vendor/github.com/anacrolix/torrent/piece.go generated vendored Normal file

@@ -0,0 +1,241 @@
package torrent
import (
"fmt"
"sync"
"github.com/anacrolix/missinggo/bitmap"
"github.com/anacrolix/torrent/metainfo"
pp "github.com/anacrolix/torrent/peer_protocol"
"github.com/anacrolix/torrent/storage"
)
// Describes the importance of obtaining a particular piece.
type piecePriority byte
func (pp *piecePriority) Raise(maybe piecePriority) bool {
if maybe > *pp {
*pp = maybe
return true
}
return false
}
// Priority for use in PriorityBitmap
func (me piecePriority) BitmapPriority() int {
return -int(me)
}
const (
PiecePriorityNone piecePriority = iota // Not wanted. Must be the zero value.
PiecePriorityNormal // Wanted.
PiecePriorityHigh // Wanted a lot.
PiecePriorityReadahead // May be required soon.
// Succeeds a piece where a read occurred. Currently the same as Now,
// apparently due to issues with caching.
PiecePriorityNext
PiecePriorityNow // A Reader is reading in this piece. Highest urgency.
)
type Piece struct {
// The completed piece SHA1 hash, from the metainfo "pieces" field.
hash metainfo.Hash
t *Torrent
index pieceIndex
files []*File
// Chunks we've written to since the last check. The chunk offset and
// length can be determined by the request chunkSize in use.
dirtyChunks bitmap.Bitmap
hashing bool
numVerifies int64
storageCompletionOk bool
publicPieceState PieceState
priority piecePriority
pendingWritesMutex sync.Mutex
pendingWrites int
noPendingWrites sync.Cond
// Connections that have written data to this piece since its last check.
// This can include connections that have closed.
dirtiers map[*connection]struct{}
}
func (p *Piece) String() string {
return fmt.Sprintf("%s/%d", p.t.infoHash.HexString(), p.index)
}
func (p *Piece) Info() metainfo.Piece {
return p.t.info.Piece(int(p.index))
}
func (p *Piece) Storage() storage.Piece {
return p.t.storage.Piece(p.Info())
}
func (p *Piece) pendingChunkIndex(chunkIndex int) bool {
return !p.dirtyChunks.Contains(chunkIndex)
}
func (p *Piece) pendingChunk(cs chunkSpec, chunkSize pp.Integer) bool {
return p.pendingChunkIndex(chunkIndex(cs, chunkSize))
}
func (p *Piece) hasDirtyChunks() bool {
return p.dirtyChunks.Len() != 0
}
func (p *Piece) numDirtyChunks() pp.Integer {
return pp.Integer(p.dirtyChunks.Len())
}
func (p *Piece) unpendChunkIndex(i int) {
p.dirtyChunks.Add(i)
p.t.tickleReaders()
}
func (p *Piece) pendChunkIndex(i int) {
p.dirtyChunks.Remove(i)
}
func (p *Piece) numChunks() pp.Integer {
return p.t.pieceNumChunks(p.index)
}
func (p *Piece) undirtiedChunkIndices() (ret bitmap.Bitmap) {
ret = p.dirtyChunks.Copy()
ret.FlipRange(0, bitmap.BitIndex(p.numChunks()))
return
}
func (p *Piece) incrementPendingWrites() {
p.pendingWritesMutex.Lock()
p.pendingWrites++
p.pendingWritesMutex.Unlock()
}
func (p *Piece) decrementPendingWrites() {
p.pendingWritesMutex.Lock()
if p.pendingWrites == 0 {
panic("assertion")
}
p.pendingWrites--
if p.pendingWrites == 0 {
p.noPendingWrites.Broadcast()
}
p.pendingWritesMutex.Unlock()
}
func (p *Piece) waitNoPendingWrites() {
p.pendingWritesMutex.Lock()
for p.pendingWrites != 0 {
p.noPendingWrites.Wait()
}
p.pendingWritesMutex.Unlock()
}
func (p *Piece) chunkIndexDirty(chunk pp.Integer) bool {
return p.dirtyChunks.Contains(bitmap.BitIndex(chunk))
}
func (p *Piece) chunkIndexSpec(chunk pp.Integer) chunkSpec {
return chunkIndexSpec(chunk, p.length(), p.chunkSize())
}
func (p *Piece) numDirtyBytes() (ret pp.Integer) {
// defer func() {
// if ret > p.length() {
// panic("too many dirty bytes")
// }
// }()
numRegularDirtyChunks := p.numDirtyChunks()
if p.chunkIndexDirty(p.numChunks() - 1) {
numRegularDirtyChunks--
ret += p.chunkIndexSpec(p.lastChunkIndex()).Length
}
ret += pp.Integer(numRegularDirtyChunks) * p.chunkSize()
return
}
func (p *Piece) length() pp.Integer {
return p.t.pieceLength(p.index)
}
func (p *Piece) chunkSize() pp.Integer {
return p.t.chunkSize
}
func (p *Piece) lastChunkIndex() pp.Integer {
return p.numChunks() - 1
}
func (p *Piece) bytesLeft() (ret pp.Integer) {
if p.t.pieceComplete(p.index) {
return 0
}
return p.length() - p.numDirtyBytes()
}
func (p *Piece) VerifyData() {
p.t.cl.lock()
defer p.t.cl.unlock()
target := p.numVerifies + 1
if p.hashing {
target++
}
// log.Printf("target: %d", target)
p.t.queuePieceCheck(p.index)
for p.numVerifies < target {
// log.Printf("got %d verifies", p.numVerifies)
p.t.cl.event.Wait()
}
// log.Print("done")
}
func (p *Piece) queuedForHash() bool {
return p.t.piecesQueuedForHash.Get(bitmap.BitIndex(p.index))
}
func (p *Piece) torrentBeginOffset() int64 {
return int64(p.index) * p.t.info.PieceLength
}
func (p *Piece) torrentEndOffset() int64 {
return p.torrentBeginOffset() + int64(p.length())
}
func (p *Piece) SetPriority(prio piecePriority) {
p.t.cl.lock()
defer p.t.cl.unlock()
p.priority = prio
p.t.updatePiecePriority(p.index)
}
func (p *Piece) uncachedPriority() (ret piecePriority) {
if p.t.pieceComplete(p.index) || p.t.pieceQueuedForHash(p.index) || p.t.hashingPiece(p.index) {
return PiecePriorityNone
}
for _, f := range p.files {
ret.Raise(f.prio)
}
if p.t.readerNowPieces.Contains(int(p.index)) {
ret.Raise(PiecePriorityNow)
}
// if t.readerNowPieces.Contains(piece - 1) {
// return PiecePriorityNext
// }
if p.t.readerReadaheadPieces.Contains(bitmap.BitIndex(p.index)) {
ret.Raise(PiecePriorityReadahead)
}
ret.Raise(p.priority)
return
}
func (p *Piece) completion() (ret storage.Completion) {
ret.Complete = p.t.pieceComplete(p.index)
ret.Ok = p.storageCompletionOk
return
}

vendor/github.com/anacrolix/torrent/piecestate.go generated vendored Normal file

@@ -0,0 +1,21 @@
package torrent
import (
"github.com/anacrolix/torrent/storage"
)
// The current state of a piece.
type PieceState struct {
Priority piecePriority
storage.Completion
// The piece is being hashed, or is queued for hash.
Checking bool
// Some of the piece has been obtained.
Partial bool
}
// Represents a series of consecutive pieces with the same state.
type PieceStateRun struct {
PieceState
Length int // How many consecutive pieces have this state.
}

vendor/github.com/anacrolix/torrent/portfwd.go generated vendored Normal file

@@ -0,0 +1,39 @@
package torrent
import (
"log"
"time"
flog "github.com/anacrolix/log"
"github.com/elgatito/upnp"
)
func addPortMapping(d upnp.Device, proto upnp.Protocol, internalPort int, debug bool) {
externalPort, err := d.AddPortMapping(proto, internalPort, internalPort, "anacrolix/torrent", 0)
if err != nil {
log.Printf("error adding %s port mapping: %s", proto, err)
} else if externalPort != internalPort {
log.Printf("external port %d does not match internal port %d in port mapping", externalPort, internalPort)
} else if debug {
log.Printf("forwarded external %s port %d", proto, externalPort)
}
}
func (cl *Client) forwardPort() {
cl.lock()
defer cl.unlock()
if cl.config.NoDefaultPortForwarding {
return
}
cl.unlock()
ds := upnp.Discover(0, 2*time.Second)
cl.lock()
flog.Default.Handle(flog.Fmsg("discovered %d upnp devices", len(ds)))
port := cl.incomingPeerPort()
cl.unlock()
for _, d := range ds {
go addPortMapping(d, upnp.TCP, port, cl.config.Debug)
go addPortMapping(d, upnp.UDP, port, cl.config.Debug)
}
cl.lock()
}


@@ -0,0 +1,49 @@
package torrent
import "github.com/google/btree"
// Peers are stored with their priority at insertion. Their priority may
// change if our apparent IP changes; we don't currently handle that.
type prioritizedPeersItem struct {
prio peerPriority
p Peer
}
func (me prioritizedPeersItem) Less(than btree.Item) bool {
return me.prio < than.(prioritizedPeersItem).prio
}
type prioritizedPeers struct {
om *btree.BTree
getPrio func(Peer) peerPriority
}
func (me *prioritizedPeers) Each(f func(Peer)) {
me.om.Ascend(func(i btree.Item) bool {
f(i.(prioritizedPeersItem).p)
return true
})
}
func (me *prioritizedPeers) Len() int {
return me.om.Len()
}
// Returns true if a peer is replaced.
func (me *prioritizedPeers) Add(p Peer) bool {
return me.om.ReplaceOrInsert(prioritizedPeersItem{me.getPrio(p), p}) != nil
}
func (me *prioritizedPeers) DeleteMin() (ret prioritizedPeersItem, ok bool) {
i := me.om.DeleteMin()
if i == nil {
return
}
ret = i.(prioritizedPeersItem)
ok = true
return
}
func (me *prioritizedPeers) PopMax() Peer {
return me.om.DeleteMax().(prioritizedPeersItem).p
}
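`prioritizedPeers` relies on google/btree keeping items ordered by `Less`, so `DeleteMax` yields the highest-priority peer. A dependency-free sketch of the same ordering semantics, using a sorted slice instead of a B-tree (`item` and `popMax` are illustrative stand-ins, not library names):

```go
package main

import (
	"fmt"
	"sort"
)

// peerPriority is a uint32 in the real package.
type peerPriority = uint32

type item struct {
	prio peerPriority
	name string
}

// popMax mimics prioritizedPeers.PopMax: remove and return the item with the
// highest priority. The B-tree does this in O(log n); sorting a slice is
// enough to show the ordering contract.
func popMax(items []item) (item, []item) {
	sort.Slice(items, func(i, j int) bool { return items[i].prio < items[j].prio })
	last := items[len(items)-1]
	return last, items[:len(items)-1]
}

func main() {
	peers := []item{{3, "a"}, {9, "b"}, {1, "c"}}
	max, rest := popMax(peers)
	fmt.Println(max.name, len(rest))
}
```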

vendor/github.com/anacrolix/torrent/protocol.go generated vendored Normal file

@@ -0,0 +1,9 @@
package torrent
import (
pp "github.com/anacrolix/torrent/peer_protocol"
)
func makeCancelMessage(r request) pp.Message {
return pp.MakeCancelMessage(r.Index, r.Begin, r.Length)
}

vendor/github.com/anacrolix/torrent/ratelimitreader.go generated vendored Normal file

@@ -0,0 +1,53 @@
package torrent
import (
"context"
"fmt"
"io"
"time"
"golang.org/x/time/rate"
)
type rateLimitedReader struct {
l *rate.Limiter
r io.Reader
// This is the time of the last Read's reservation.
lastRead time.Time
}
func (me *rateLimitedReader) Read(b []byte) (n int, err error) {
const oldStyle = false // Retained for future reference.
if oldStyle {
// Wait until we can read at all.
if err := me.l.WaitN(context.Background(), 1); err != nil {
panic(err)
}
// Limit the read to within the burst.
if me.l.Limit() != rate.Inf && len(b) > me.l.Burst() {
b = b[:me.l.Burst()]
}
n, err = me.r.Read(b)
// Pay the piper.
now := time.Now()
me.lastRead = now
if !me.l.ReserveN(now, n-1).OK() {
panic(fmt.Sprintf("burst exceeded?: %d", n-1))
}
} else {
// Limit the read to within the burst.
if me.l.Limit() != rate.Inf && len(b) > me.l.Burst() {
b = b[:me.l.Burst()]
}
n, err = me.r.Read(b)
now := time.Now()
r := me.l.ReserveN(now, n)
if !r.OK() {
panic(n)
}
me.lastRead = now
time.Sleep(r.Delay())
}
return
}

vendor/github.com/anacrolix/torrent/reader.go generated vendored Normal file

@@ -0,0 +1,273 @@
package torrent
import (
"context"
"errors"
"io"
"log"
"sync"
"github.com/anacrolix/missinggo"
"github.com/anacrolix/torrent/peer_protocol"
)
type Reader interface {
io.Reader
io.Seeker
io.Closer
missinggo.ReadContexter
SetReadahead(int64)
SetResponsive()
}
// Piece range by piece index, [begin, end).
type pieceRange struct {
begin, end pieceIndex
}
// Accesses Torrent data via a Client. Reads block until the data is
// available. Seeks and readahead also drive Client behaviour.
type reader struct {
t *Torrent
responsive bool
// Adjust the read/seek window to handle Readers locked to File extents
// and the like.
offset, length int64
// Ensure operations that change the position are exclusive, like Read()
// and Seek().
opMu sync.Mutex
// Required when modifying pos and readahead, or reading them without
// opMu.
mu sync.Locker
pos int64
readahead int64
// The cached piece range this reader wants downloaded. The zero value
// corresponds to nothing. We cache this so that changes can be detected,
// and bubbled up to the Torrent only as required.
pieces pieceRange
}
var _ io.ReadCloser = &reader{}
// Don't wait for pieces to complete and be verified. Read calls return as
// soon as they can when the underlying chunks become available.
func (r *reader) SetResponsive() {
r.responsive = true
r.t.cl.event.Broadcast()
}
// Disable responsive mode. TODO: Remove?
func (r *reader) SetNonResponsive() {
r.responsive = false
r.t.cl.event.Broadcast()
}
// Configure the number of bytes ahead of a read that should also be
// prioritized in preparation for further reads.
func (r *reader) SetReadahead(readahead int64) {
r.mu.Lock()
r.readahead = readahead
r.mu.Unlock()
r.t.cl.lock()
defer r.t.cl.unlock()
r.posChanged()
}
func (r *reader) readable(off int64) (ret bool) {
if r.t.closed.IsSet() {
return true
}
req, ok := r.t.offsetRequest(r.torrentOffset(off))
if !ok {
panic(off)
}
if r.responsive {
return r.t.haveChunk(req)
}
return r.t.pieceComplete(pieceIndex(req.Index))
}
// How many bytes are available to read. Max is the most we could require.
func (r *reader) available(off, max int64) (ret int64) {
off += r.offset
for max > 0 {
req, ok := r.t.offsetRequest(off)
if !ok {
break
}
if !r.t.haveChunk(req) {
break
}
len1 := int64(req.Length) - (off - r.t.requestOffset(req))
max -= len1
ret += len1
off += len1
}
// Ensure that ret hasn't exceeded our original max.
if max < 0 {
ret += max
}
return
}
func (r *reader) waitReadable(off int64) {
// We may have been sent back here because we were told we could read but
// it failed.
r.t.cl.event.Wait()
}
// Calculates the pieces this reader wants downloaded, ignoring the cached
// value at r.pieces.
func (r *reader) piecesUncached() (ret pieceRange) {
ra := r.readahead
if ra < 1 {
// Needs to be at least 1, because [x, x) means we don't want
// anything.
ra = 1
}
if ra > r.length-r.pos {
ra = r.length - r.pos
}
ret.begin, ret.end = r.t.byteRegionPieces(r.torrentOffset(r.pos), ra)
return
}
func (r *reader) Read(b []byte) (n int, err error) {
return r.ReadContext(context.Background(), b)
}
func (r *reader) ReadContext(ctx context.Context, b []byte) (n int, err error) {
// This is set under the Client lock if the Context is canceled.
var ctxErr error
if ctx.Done() != nil {
ctx, cancel := context.WithCancel(ctx)
// Abort the goroutine when the function returns.
defer cancel()
go func() {
<-ctx.Done()
r.t.cl.lock()
ctxErr = ctx.Err()
r.t.tickleReaders()
r.t.cl.unlock()
}()
}
// Hmmm, if a Read gets stuck, this means you can't change position for
// other purposes. That seems reasonable, but unusual.
r.opMu.Lock()
defer r.opMu.Unlock()
for len(b) != 0 {
var n1 int
n1, err = r.readOnceAt(b, r.pos, &ctxErr)
if n1 == 0 {
if err == nil {
panic("expected error")
}
break
}
b = b[n1:]
n += n1
r.mu.Lock()
r.pos += int64(n1)
r.posChanged()
r.mu.Unlock()
}
if r.pos >= r.length {
err = io.EOF
} else if err == io.EOF {
err = io.ErrUnexpectedEOF
}
return
}
// Wait until some data should be available to read. Tickles the client if it
// isn't. Returns how much should be readable without blocking.
func (r *reader) waitAvailable(pos, wanted int64, ctxErr *error) (avail int64) {
r.t.cl.lock()
defer r.t.cl.unlock()
for !r.readable(pos) && *ctxErr == nil {
r.waitReadable(pos)
}
return r.available(pos, wanted)
}
func (r *reader) torrentOffset(readerPos int64) int64 {
return r.offset + readerPos
}
// Performs at most one successful read to torrent storage.
func (r *reader) readOnceAt(b []byte, pos int64, ctxErr *error) (n int, err error) {
if pos >= r.length {
err = io.EOF
return
}
for {
avail := r.waitAvailable(pos, int64(len(b)), ctxErr)
if avail == 0 {
if r.t.closed.IsSet() {
err = errors.New("torrent closed")
return
}
if *ctxErr != nil {
err = *ctxErr
return
}
}
pi := peer_protocol.Integer(r.torrentOffset(pos) / r.t.info.PieceLength)
ip := r.t.info.Piece(int(pi))
po := r.torrentOffset(pos) % r.t.info.PieceLength
b1 := missinggo.LimitLen(b, ip.Length()-po, avail)
n, err = r.t.readAt(b1, r.torrentOffset(pos))
if n != 0 {
err = nil
return
}
r.t.cl.lock()
// TODO: Just reset pieces in the readahead window. This might help
// prevent thrashing with small caches and file and piece priorities.
log.Printf("error reading torrent %q piece %d offset %d, %d bytes: %s", r.t, pi, po, len(b1), err)
r.t.updateAllPieceCompletions()
r.t.updateAllPiecePriorities()
r.t.cl.unlock()
}
}
func (r *reader) Close() error {
r.t.cl.lock()
defer r.t.cl.unlock()
r.t.deleteReader(r)
return nil
}
func (r *reader) posChanged() {
to := r.piecesUncached()
from := r.pieces
if to == from {
return
}
r.pieces = to
// log.Printf("reader pos changed %v->%v", from, to)
r.t.readerPosChanged(from, to)
}
func (r *reader) Seek(off int64, whence int) (ret int64, err error) {
r.opMu.Lock()
defer r.opMu.Unlock()
r.mu.Lock()
defer r.mu.Unlock()
switch whence {
case io.SeekStart:
r.pos = off
case io.SeekCurrent:
r.pos += off
case io.SeekEnd:
r.pos = r.length + off
default:
err = errors.New("bad whence")
}
ret = r.pos
r.posChanged()
return
}

vendor/github.com/anacrolix/torrent/socket.go generated vendored Normal file

@@ -0,0 +1,181 @@
package torrent
import (
"context"
"fmt"
"net"
"net/url"
"strconv"
"strings"
"github.com/anacrolix/missinggo"
"github.com/anacrolix/missinggo/perf"
"golang.org/x/net/proxy"
)
type dialer interface {
dial(_ context.Context, addr string) (net.Conn, error)
}
type socket interface {
net.Listener
dialer
}
func getProxyDialer(proxyURL string) (proxy.Dialer, error) {
fixedURL, err := url.Parse(proxyURL)
if err != nil {
return nil, err
}
return proxy.FromURL(fixedURL, proxy.Direct)
}
func listen(network, addr, proxyURL string, f firewallCallback) (socket, error) {
if isTcpNetwork(network) {
return listenTcp(network, addr, proxyURL)
} else if isUtpNetwork(network) {
return listenUtp(network, addr, proxyURL, f)
} else {
panic(fmt.Sprintf("unknown network %q", network))
}
}
func isTcpNetwork(s string) bool {
return strings.Contains(s, "tcp")
}
func isUtpNetwork(s string) bool {
return strings.Contains(s, "utp") || strings.Contains(s, "udp")
}
func listenTcp(network, address, proxyURL string) (s socket, err error) {
l, err := net.Listen(network, address)
if err != nil {
return
}
defer func() {
if err != nil {
l.Close()
}
}()
	// If no proxy is needed, return a dialer backed by the default
	// net.Dialer; otherwise, try to parse proxyURL and return a proxy.Dialer.
if len(proxyURL) != 0 {
// TODO: The error should be propagated, as proxy may be in use for
// security or privacy reasons. Also just pass proxy.Dialer in from
// the Config.
if dialer, err := getProxyDialer(proxyURL); err == nil {
return tcpSocket{l, func(ctx context.Context, addr string) (conn net.Conn, err error) {
defer perf.ScopeTimerErr(&err)()
return dialer.Dial(network, addr)
}}, nil
}
}
dialer := net.Dialer{}
return tcpSocket{l, func(ctx context.Context, addr string) (conn net.Conn, err error) {
defer perf.ScopeTimerErr(&err)()
return dialer.DialContext(ctx, network, addr)
}}, nil
}
type tcpSocket struct {
net.Listener
d func(ctx context.Context, addr string) (net.Conn, error)
}
func (me tcpSocket) dial(ctx context.Context, addr string) (net.Conn, error) {
return me.d(ctx, addr)
}
func setPort(addr string, port int) string {
host, _, err := net.SplitHostPort(addr)
if err != nil {
panic(err)
}
return net.JoinHostPort(host, strconv.FormatInt(int64(port), 10))
}
func listenAll(networks []string, getHost func(string) string, port int, proxyURL string, f firewallCallback) ([]socket, error) {
if len(networks) == 0 {
return nil, nil
}
var nahs []networkAndHost
for _, n := range networks {
nahs = append(nahs, networkAndHost{n, getHost(n)})
}
for {
ss, retry, err := listenAllRetry(nahs, port, proxyURL, f)
if !retry {
return ss, err
}
}
}
type networkAndHost struct {
Network string
Host string
}
func listenAllRetry(nahs []networkAndHost, port int, proxyURL string, f firewallCallback) (ss []socket, retry bool, err error) {
ss = make([]socket, 1, len(nahs))
portStr := strconv.FormatInt(int64(port), 10)
ss[0], err = listen(nahs[0].Network, net.JoinHostPort(nahs[0].Host, portStr), proxyURL, f)
if err != nil {
return nil, false, fmt.Errorf("first listen: %s", err)
}
defer func() {
if err != nil || retry {
for _, s := range ss {
s.Close()
}
ss = nil
}
}()
portStr = strconv.FormatInt(int64(missinggo.AddrPort(ss[0].Addr())), 10)
for _, nah := range nahs[1:] {
s, err := listen(nah.Network, net.JoinHostPort(nah.Host, portStr), proxyURL, f)
if err != nil {
return ss,
missinggo.IsAddrInUse(err) && port == 0,
fmt.Errorf("subsequent listen: %s", err)
}
ss = append(ss, s)
}
return
}
type firewallCallback func(net.Addr) bool
func listenUtp(network, addr, proxyURL string, fc firewallCallback) (s socket, err error) {
us, err := NewUtpSocket(network, addr, fc)
if err != nil {
return
}
	// If no proxy is needed, return a dialer backed by the default
	// net.Dialer; otherwise, try to parse proxyURL and return a proxy.Dialer.
if len(proxyURL) != 0 {
if dialer, err := getProxyDialer(proxyURL); err == nil {
return utpSocketSocket{us, network, dialer}, nil
}
}
return utpSocketSocket{us, network, nil}, nil
}
type utpSocketSocket struct {
utpSocket
network string
d proxy.Dialer
}
func (me utpSocketSocket) dial(ctx context.Context, addr string) (conn net.Conn, err error) {
defer perf.ScopeTimerErr(&err)()
if me.d != nil {
return me.d.Dial(me.network, addr)
}
return me.utpSocket.DialContext(ctx, me.network, addr)
}
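`setPort` is how `listenAllRetry` propagates the OS-assigned port from the first (possibly ephemeral) listener to the remaining networks. A standalone copy shows the `SplitHostPort`/`JoinHostPort` round trip, including the IPv6 bracketing that `JoinHostPort` handles:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// setPort, as defined above: swap the port in a host:port address.
func setPort(addr string, port int) string {
	host, _, err := net.SplitHostPort(addr)
	if err != nil {
		panic(err)
	}
	return net.JoinHostPort(host, strconv.FormatInt(int64(port), 10))
}

func main() {
	// Port 0 in the original bind address means "let the OS choose"; the
	// chosen port is then substituted in for the other networks.
	fmt.Println(setPort("0.0.0.0:0", 42))
	fmt.Println(setPort("[::1]:0", 42)) // brackets are preserved for IPv6
}
```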

vendor/github.com/anacrolix/torrent/spec.go generated vendored Normal file

@@ -0,0 +1,48 @@
package torrent
import (
"github.com/anacrolix/torrent/metainfo"
"github.com/anacrolix/torrent/storage"
)
// Specifies a new torrent for adding to a client. There are helpers for
// magnet URIs and torrent metainfo files.
type TorrentSpec struct {
// The tiered tracker URIs.
Trackers [][]string
InfoHash metainfo.Hash
InfoBytes []byte
// The name to use if the Name field from the Info isn't available.
DisplayName string
// The chunk size to use for outbound requests. Defaults to 16KiB if not
// set.
ChunkSize int
Storage storage.ClientImpl
}
func TorrentSpecFromMagnetURI(uri string) (spec *TorrentSpec, err error) {
m, err := metainfo.ParseMagnetURI(uri)
if err != nil {
return
}
spec = &TorrentSpec{
Trackers: [][]string{m.Trackers},
DisplayName: m.DisplayName,
InfoHash: m.InfoHash,
}
return
}
func TorrentSpecFromMetaInfo(mi *metainfo.MetaInfo) (spec *TorrentSpec) {
info, _ := mi.UnmarshalInfo()
spec = &TorrentSpec{
Trackers: mi.AnnounceList,
InfoBytes: mi.InfoBytes,
DisplayName: info.Name,
InfoHash: mi.HashInfoBytes(),
}
if spec.Trackers == nil && mi.Announce != "" {
spec.Trackers = [][]string{{mi.Announce}}
}
return
}
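`TorrentSpecFromMagnetURI` delegates to `metainfo.ParseMagnetURI`, which pulls the infohash (`xt`), display name (`dn`), and trackers (`tr`) out of the magnet link's query string. A rough stdlib sketch of that shape (the `magnet` type and `parseMagnet` are illustrative; the real parser also decodes the hex/base32 infohash into a `metainfo.Hash`):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// magnet holds the fields a magnet URI commonly carries.
type magnet struct {
	InfoHashHex string
	DisplayName string
	Trackers    []string
}

func parseMagnet(uri string) (m magnet, err error) {
	u, err := url.Parse(uri)
	if err != nil {
		return
	}
	q := u.Query()
	m.InfoHashHex = strings.TrimPrefix(q.Get("xt"), "urn:btih:")
	m.DisplayName = q.Get("dn")
	m.Trackers = q["tr"] // tr may repeat; collect all values
	return
}

func main() {
	m, _ := parseMagnet("magnet:?xt=urn:btih:deadbeef&dn=example&tr=http://t.example/announce")
	fmt.Println(m.DisplayName, len(m.Trackers))
}
```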


@@ -0,0 +1,93 @@
package storage
import (
"encoding/binary"
"os"
"path/filepath"
"time"
"github.com/boltdb/bolt"
"github.com/anacrolix/torrent/metainfo"
)
const (
boltDbCompleteValue = "c"
boltDbIncompleteValue = "i"
)
var (
completionBucketKey = []byte("completion")
)
type boltPieceCompletion struct {
db *bolt.DB
}
var _ PieceCompletion = (*boltPieceCompletion)(nil)
func NewBoltPieceCompletion(dir string) (ret PieceCompletion, err error) {
os.MkdirAll(dir, 0770)
p := filepath.Join(dir, ".torrent.bolt.db")
db, err := bolt.Open(p, 0660, &bolt.Options{
Timeout: time.Second,
})
if err != nil {
return
}
db.NoSync = true
ret = &boltPieceCompletion{db}
return
}
func (me boltPieceCompletion) Get(pk metainfo.PieceKey) (cn Completion, err error) {
err = me.db.View(func(tx *bolt.Tx) error {
cb := tx.Bucket(completionBucketKey)
if cb == nil {
return nil
}
ih := cb.Bucket(pk.InfoHash[:])
if ih == nil {
return nil
}
var key [4]byte
binary.BigEndian.PutUint32(key[:], uint32(pk.Index))
cn.Ok = true
switch string(ih.Get(key[:])) {
case boltDbCompleteValue:
cn.Complete = true
case boltDbIncompleteValue:
cn.Complete = false
default:
cn.Ok = false
}
return nil
})
return
}
func (me boltPieceCompletion) Set(pk metainfo.PieceKey, b bool) error {
return me.db.Update(func(tx *bolt.Tx) error {
c, err := tx.CreateBucketIfNotExists(completionBucketKey)
if err != nil {
return err
}
ih, err := c.CreateBucketIfNotExists(pk.InfoHash[:])
if err != nil {
return err
}
var key [4]byte
binary.BigEndian.PutUint32(key[:], uint32(pk.Index))
return ih.Put(key[:], []byte(func() string {
if b {
return boltDbCompleteValue
} else {
return boltDbIncompleteValue
}
}()))
})
}
func (me *boltPieceCompletion) Close() error {
return me.db.Close()
}


@@ -0,0 +1,101 @@
package storage
import (
"encoding/binary"
"github.com/anacrolix/missinggo/x"
"github.com/boltdb/bolt"
"github.com/anacrolix/torrent/metainfo"
)
type boltDBPiece struct {
db *bolt.DB
p metainfo.Piece
ih metainfo.Hash
key [24]byte
}
var (
_ PieceImpl = (*boltDBPiece)(nil)
dataBucketKey = []byte("data")
)
func (me *boltDBPiece) pc() PieceCompletionGetSetter {
return boltPieceCompletion{me.db}
}
func (me *boltDBPiece) pk() metainfo.PieceKey {
return metainfo.PieceKey{me.ih, me.p.Index()}
}
func (me *boltDBPiece) Completion() Completion {
c, err := me.pc().Get(me.pk())
x.Pie(err)
return c
}
func (me *boltDBPiece) MarkComplete() error {
return me.pc().Set(me.pk(), true)
}
func (me *boltDBPiece) MarkNotComplete() error {
return me.pc().Set(me.pk(), false)
}
func (me *boltDBPiece) ReadAt(b []byte, off int64) (n int, err error) {
err = me.db.View(func(tx *bolt.Tx) error {
db := tx.Bucket(dataBucketKey)
if db == nil {
return nil
}
ci := off / chunkSize
off %= chunkSize
for len(b) != 0 {
ck := me.chunkKey(int(ci))
_b := db.Get(ck[:])
if len(_b) != chunkSize {
break
}
n1 := copy(b, _b[off:])
off = 0
ci++
b = b[n1:]
n += n1
}
return nil
})
return
}
func (me *boltDBPiece) chunkKey(index int) (ret [26]byte) {
copy(ret[:], me.key[:])
binary.BigEndian.PutUint16(ret[24:], uint16(index))
return
}
func (me *boltDBPiece) WriteAt(b []byte, off int64) (n int, err error) {
err = me.db.Update(func(tx *bolt.Tx) error {
db, err := tx.CreateBucketIfNotExists(dataBucketKey)
if err != nil {
return err
}
ci := off / chunkSize
off %= chunkSize
for len(b) != 0 {
_b := make([]byte, chunkSize)
ck := me.chunkKey(int(ci))
copy(_b, db.Get(ck[:]))
n1 := copy(_b[off:], b)
db.Put(ck[:], _b)
if n1 > len(b) {
break
}
b = b[n1:]
off = 0
ci++
n += n1
}
return nil
})
return
}
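The bolt keys above have a fixed layout: a 20-byte infohash, a big-endian uint32 piece index in bytes 20..23 (set in `boltdb.go`), and, for per-chunk data keys, a big-endian uint16 chunk index in bytes 24..25. A standalone reconstruction of that layout:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// chunkKey rebuilds the 26-byte data-bucket key used above:
// infohash (20) | piece index, big-endian uint32 (4) | chunk index, big-endian uint16 (2).
func chunkKey(infoHash [20]byte, pieceIndex uint32, chunkIndex uint16) (key [26]byte) {
	copy(key[:], infoHash[:])
	binary.BigEndian.PutUint32(key[20:], pieceIndex)
	binary.BigEndian.PutUint16(key[24:], chunkIndex)
	return
}

func main() {
	var ih [20]byte
	k := chunkKey(ih, 1, 2)
	// Big-endian means the low bytes of each index land last.
	fmt.Println(k[23], k[25])
}
```

Keeping keys big-endian makes bolt's byte-ordered iteration visit chunks of a piece, and pieces of a torrent, in index order.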

vendor/github.com/anacrolix/torrent/storage/boltdb.go generated vendored Normal file

@@ -0,0 +1,57 @@
package storage
import (
"encoding/binary"
"path/filepath"
"time"
"github.com/anacrolix/missinggo/expect"
"github.com/boltdb/bolt"
"github.com/anacrolix/torrent/metainfo"
)
const (
// Chosen to match the usual chunk size in a torrent client. This way,
// most chunk writes are to exactly one full item in bolt DB.
chunkSize = 1 << 14
)
type boltDBClient struct {
db *bolt.DB
}
type boltDBTorrent struct {
cl *boltDBClient
ih metainfo.Hash
}
func NewBoltDB(filePath string) ClientImpl {
db, err := bolt.Open(filepath.Join(filePath, "bolt.db"), 0600, &bolt.Options{
Timeout: time.Second,
})
expect.Nil(err)
db.NoSync = true
return &boltDBClient{db}
}
func (me *boltDBClient) Close() error {
return me.db.Close()
}
func (me *boltDBClient) OpenTorrent(info *metainfo.Info, infoHash metainfo.Hash) (TorrentImpl, error) {
return &boltDBTorrent{me, infoHash}, nil
}
func (me *boltDBTorrent) Piece(p metainfo.Piece) PieceImpl {
ret := &boltDBPiece{
p: p,
db: me.cl.db,
ih: me.ih,
}
copy(ret.key[:], me.ih[:])
binary.BigEndian.PutUint32(ret.key[20:], uint32(p.Index()))
return ret
}
func (boltDBTorrent) Close() error { return nil }


@@ -0,0 +1,27 @@
package storage
import (
"log"
"github.com/anacrolix/torrent/metainfo"
)
type PieceCompletionGetSetter interface {
Get(metainfo.PieceKey) (Completion, error)
Set(_ metainfo.PieceKey, complete bool) error
}
// Implementations track the completion of pieces. They must be
// concurrent-safe.
type PieceCompletion interface {
PieceCompletionGetSetter
Close() error
}
func pieceCompletionForDir(dir string) (ret PieceCompletion) {
ret, err := NewBoltPieceCompletion(dir)
if err != nil {
log.Printf("couldn't open piece completion db in %q: %s", dir, err)
ret = NewMapPieceCompletion()
}
return
}


@@ -0,0 +1,37 @@
package storage
import (
"sync"
"github.com/anacrolix/torrent/metainfo"
)
type mapPieceCompletion struct {
mu sync.Mutex
m map[metainfo.PieceKey]bool
}
var _ PieceCompletion = (*mapPieceCompletion)(nil)
func NewMapPieceCompletion() PieceCompletion {
return &mapPieceCompletion{m: make(map[metainfo.PieceKey]bool)}
}
func (*mapPieceCompletion) Close() error { return nil }
func (me *mapPieceCompletion) Get(pk metainfo.PieceKey) (c Completion, err error) {
me.mu.Lock()
defer me.mu.Unlock()
c.Complete, c.Ok = me.m[pk]
return
}
func (me *mapPieceCompletion) Set(pk metainfo.PieceKey, b bool) error {
me.mu.Lock()
defer me.mu.Unlock()
if me.m == nil {
me.m = make(map[metainfo.PieceKey]bool)
}
me.m[pk] = b
return nil
}

vendor/github.com/anacrolix/torrent/storage/doc.go generated vendored Normal file

@@ -0,0 +1,2 @@
// Package storage implements storage backends for package torrent.
package storage

vendor/github.com/anacrolix/torrent/storage/file.go generated vendored Normal file

@@ -0,0 +1,219 @@
package storage
import (
"io"
"os"
"path/filepath"
"github.com/anacrolix/missinggo"
"github.com/anacrolix/torrent/metainfo"
)
// File-based storage for torrents that isn't yet bound to a particular
// torrent.
type fileClientImpl struct {
baseDir string
pathMaker func(baseDir string, info *metainfo.Info, infoHash metainfo.Hash) string
pc PieceCompletion
}
// The default path maker just returns the base directory unchanged.
func defaultPathMaker(baseDir string, info *metainfo.Info, infoHash metainfo.Hash) string {
return baseDir
}
func infoHashPathMaker(baseDir string, info *metainfo.Info, infoHash metainfo.Hash) string {
return filepath.Join(baseDir, infoHash.HexString())
}
// All torrent data is stored in this baseDir.
func NewFile(baseDir string) ClientImpl {
return NewFileWithCompletion(baseDir, pieceCompletionForDir(baseDir))
}
func NewFileWithCompletion(baseDir string, completion PieceCompletion) ClientImpl {
return newFileWithCustomPathMakerAndCompletion(baseDir, nil, completion)
}
// File storage with data partitioned by infohash.
func NewFileByInfoHash(baseDir string) ClientImpl {
return NewFileWithCustomPathMaker(baseDir, infoHashPathMaker)
}
// Allows passing a function to determine the path for storing torrent data
func NewFileWithCustomPathMaker(baseDir string, pathMaker func(baseDir string, info *metainfo.Info, infoHash metainfo.Hash) string) ClientImpl {
return newFileWithCustomPathMakerAndCompletion(baseDir, pathMaker, pieceCompletionForDir(baseDir))
}
func newFileWithCustomPathMakerAndCompletion(baseDir string, pathMaker func(baseDir string, info *metainfo.Info, infoHash metainfo.Hash) string, completion PieceCompletion) ClientImpl {
if pathMaker == nil {
pathMaker = defaultPathMaker
}
return &fileClientImpl{
baseDir: baseDir,
pathMaker: pathMaker,
pc: completion,
}
}
func (me *fileClientImpl) Close() error {
return me.pc.Close()
}
func (fs *fileClientImpl) OpenTorrent(info *metainfo.Info, infoHash metainfo.Hash) (TorrentImpl, error) {
dir := fs.pathMaker(fs.baseDir, info, infoHash)
err := CreateNativeZeroLengthFiles(info, dir)
if err != nil {
return nil, err
}
return &fileTorrentImpl{
dir,
info,
infoHash,
fs.pc,
}, nil
}
type fileTorrentImpl struct {
dir string
info *metainfo.Info
infoHash metainfo.Hash
completion PieceCompletion
}
func (fts *fileTorrentImpl) Piece(p metainfo.Piece) PieceImpl {
// Create a view onto the file-based torrent storage.
_io := fileTorrentImplIO{fts}
// Return the appropriate segments of this.
return &filePieceImpl{
fts,
p,
missinggo.NewSectionWriter(_io, p.Offset(), p.Length()),
io.NewSectionReader(_io, p.Offset(), p.Length()),
}
}
func (fs *fileTorrentImpl) Close() error {
return nil
}
// Creates native files for any zero-length file entries in the info. This is
// a helper for file-based storages, which don't address or write to
// zero-length files because they have no corresponding pieces.
func CreateNativeZeroLengthFiles(info *metainfo.Info, dir string) (err error) {
for _, fi := range info.UpvertedFiles() {
if fi.Length != 0 {
continue
}
name := filepath.Join(append([]string{dir, info.Name}, fi.Path...)...)
os.MkdirAll(filepath.Dir(name), 0777)
var f io.Closer
f, err = os.Create(name)
if err != nil {
break
}
f.Close()
}
return
}
// Exposes file-based storage of a torrent, as one big ReadWriterAt.
type fileTorrentImplIO struct {
fts *fileTorrentImpl
}
// Returns EOF on short or missing file.
func (fst *fileTorrentImplIO) readFileAt(fi metainfo.FileInfo, b []byte, off int64) (n int, err error) {
f, err := os.Open(fst.fts.fileInfoName(fi))
if os.IsNotExist(err) {
// File missing is treated the same as a short file.
err = io.EOF
return
}
if err != nil {
return
}
defer f.Close()
// Limit the read to within the expected bounds of this file.
if int64(len(b)) > fi.Length-off {
b = b[:fi.Length-off]
}
for off < fi.Length && len(b) != 0 {
n1, err1 := f.ReadAt(b, off)
b = b[n1:]
n += n1
off += int64(n1)
if n1 == 0 {
err = err1
break
}
}
return
}
// Only returns EOF at the end of the torrent. Premature EOF is ErrUnexpectedEOF.
func (fst fileTorrentImplIO) ReadAt(b []byte, off int64) (n int, err error) {
for _, fi := range fst.fts.info.UpvertedFiles() {
for off < fi.Length {
n1, err1 := fst.readFileAt(fi, b, off)
n += n1
off += int64(n1)
b = b[n1:]
if len(b) == 0 {
// Got what we need.
return
}
if n1 != 0 {
// Made progress.
continue
}
err = err1
if err == io.EOF {
// Lies.
err = io.ErrUnexpectedEOF
}
return
}
off -= fi.Length
}
err = io.EOF
return
}
func (fst fileTorrentImplIO) WriteAt(p []byte, off int64) (n int, err error) {
for _, fi := range fst.fts.info.UpvertedFiles() {
if off >= fi.Length {
off -= fi.Length
continue
}
n1 := len(p)
if int64(n1) > fi.Length-off {
n1 = int(fi.Length - off)
}
name := fst.fts.fileInfoName(fi)
os.MkdirAll(filepath.Dir(name), 0777)
var f *os.File
f, err = os.OpenFile(name, os.O_WRONLY|os.O_CREATE, 0666)
if err != nil {
return
}
n1, err = f.WriteAt(p[:n1], off)
// TODO: On some systems, write errors can be delayed until the Close.
f.Close()
if err != nil {
return
}
n += n1
off = 0
p = p[n1:]
if len(p) == 0 {
break
}
}
return
}
func (fts *fileTorrentImpl) fileInfoName(fi metainfo.FileInfo) string {
return filepath.Join(append([]string{fts.dir, fts.info.Name}, fi.Path...)...)
}
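The `ReadAt`/`WriteAt` loops above walk `UpvertedFiles()`, subtracting each file's length from the offset until it lands inside a file. A hedged standalone version of that mapping, with `metainfo.FileInfo` reduced to plain lengths (`locate` is an illustrative helper, not a library function):

```go
package main

import "fmt"

// locate maps a torrent-wide offset to (file index, offset within that file),
// mirroring how fileTorrentImplIO walks the file list.
func locate(fileLengths []int64, off int64) (fileIndex int, fileOff int64) {
	for i, l := range fileLengths {
		if off < l {
			return i, off
		}
		off -= l
	}
	return len(fileLengths), 0 // past the end of the torrent
}

func main() {
	lengths := []int64{100, 50, 200}
	// Offset 120 is past the 100-byte first file, 20 bytes into the second.
	fmt.Println(locate(lengths, 120))
}
```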


@@ -0,0 +1,29 @@
package storage
import "github.com/anacrolix/torrent/metainfo"
func extentCompleteRequiredLengths(info *metainfo.Info, off, n int64) (ret []metainfo.FileInfo) {
if n == 0 {
return
}
for _, fi := range info.UpvertedFiles() {
if off >= fi.Length {
off -= fi.Length
continue
}
n1 := n
if off+n1 > fi.Length {
n1 = fi.Length - off
}
ret = append(ret, metainfo.FileInfo{
Path: fi.Path,
Length: off + n1,
})
n -= n1
if n == 0 {
return
}
off = 0
}
panic("extent exceeds torrent bounds")
}
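A trimmed, self-contained copy of `extentCompleteRequiredLengths`, with `metainfo.FileInfo` reduced to its `Length` field: for the extent `[off, off+n)`, report how long each touched file must be on disk for the extent to be fully present (this is what `filePieceImpl.Completion` checks against `os.Stat`):

```go
package main

import "fmt"

// requiredLengths returns, for each file the extent [off, off+n) touches,
// the on-disk length that file must have for the extent to exist.
func requiredLengths(fileLengths []int64, off, n int64) (ret []int64) {
	if n == 0 {
		return
	}
	for _, l := range fileLengths {
		if off >= l {
			off -= l
			continue
		}
		n1 := n
		if off+n1 > l {
			n1 = l - off
		}
		ret = append(ret, off+n1)
		n -= n1
		if n == 0 {
			return
		}
		off = 0
	}
	panic("extent exceeds torrent bounds")
}

func main() {
	// A 30-byte extent at offset 90 spans a 100-byte file and a 50-byte file:
	// the first must be complete (100), the second needs its first 20 bytes.
	fmt.Println(requiredLengths([]int64{100, 50}, 90, 30))
}
```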


@@ -0,0 +1,53 @@
package storage
import (
"io"
"log"
"os"
"github.com/anacrolix/torrent/metainfo"
)
type filePieceImpl struct {
*fileTorrentImpl
p metainfo.Piece
io.WriterAt
io.ReaderAt
}
var _ PieceImpl = (*filePieceImpl)(nil)
func (me *filePieceImpl) pieceKey() metainfo.PieceKey {
return metainfo.PieceKey{me.infoHash, me.p.Index()}
}
func (fs *filePieceImpl) Completion() Completion {
c, err := fs.completion.Get(fs.pieceKey())
if err != nil {
log.Printf("error getting piece completion: %s", err)
c.Ok = false
return c
}
// If it's allegedly complete, check that its constituent files have the
// necessary length.
for _, fi := range extentCompleteRequiredLengths(fs.p.Info, fs.p.Offset(), fs.p.Length()) {
s, err := os.Stat(fs.fileInfoName(fi))
if err != nil || s.Size() < fi.Length {
c.Complete = false
break
}
}
if !c.Complete {
// The completion was wrong, fix it.
fs.completion.Set(fs.pieceKey(), false)
}
return c
}
func (fs *filePieceImpl) MarkComplete() error {
return fs.completion.Set(fs.pieceKey(), true)
}
func (fs *filePieceImpl) MarkNotComplete() error {
return fs.completion.Set(fs.pieceKey(), false)
}


@@ -0,0 +1,40 @@
package storage
import (
"io"
"github.com/anacrolix/torrent/metainfo"
)
// Represents data storage for an unspecified torrent.
type ClientImpl interface {
OpenTorrent(info *metainfo.Info, infoHash metainfo.Hash) (TorrentImpl, error)
Close() error
}
// Data storage bound to a torrent.
type TorrentImpl interface {
Piece(metainfo.Piece) PieceImpl
Close() error
}
// Interacts with torrent piece data.
type PieceImpl interface {
// These interfaces are not as strict as normally required. They can
// assume that the parameters are appropriate for the dimensions of the
// piece.
io.ReaderAt
io.WriterAt
// Called when the client believes the piece data will pass a hash check.
// The storage can move or mark the piece data as read-only as it sees
// fit.
MarkComplete() error
MarkNotComplete() error
	// Returns the piece's current completion state.
Completion() Completion
}
type Completion struct {
Complete bool
Ok bool
}

vendor/github.com/anacrolix/torrent/storage/mmap.go generated vendored Normal file

@@ -0,0 +1,161 @@
package storage
import (
"errors"
"fmt"
"io"
"os"
"path/filepath"
"github.com/anacrolix/missinggo"
"github.com/edsrzf/mmap-go"
"github.com/anacrolix/torrent/metainfo"
"github.com/anacrolix/torrent/mmap_span"
)
type mmapClientImpl struct {
baseDir string
pc PieceCompletion
}
func NewMMap(baseDir string) ClientImpl {
return NewMMapWithCompletion(baseDir, pieceCompletionForDir(baseDir))
}
func NewMMapWithCompletion(baseDir string, completion PieceCompletion) ClientImpl {
return &mmapClientImpl{
baseDir: baseDir,
pc: completion,
}
}
func (s *mmapClientImpl) OpenTorrent(info *metainfo.Info, infoHash metainfo.Hash) (t TorrentImpl, err error) {
span, err := mMapTorrent(info, s.baseDir)
t = &mmapTorrentStorage{
infoHash: infoHash,
span: span,
pc: s.pc,
}
return
}
func (s *mmapClientImpl) Close() error {
return s.pc.Close()
}
type mmapTorrentStorage struct {
infoHash metainfo.Hash
span *mmap_span.MMapSpan
pc PieceCompletion
}
func (ts *mmapTorrentStorage) Piece(p metainfo.Piece) PieceImpl {
return mmapStoragePiece{
pc: ts.pc,
p: p,
ih: ts.infoHash,
ReaderAt: io.NewSectionReader(ts.span, p.Offset(), p.Length()),
WriterAt: missinggo.NewSectionWriter(ts.span, p.Offset(), p.Length()),
}
}
func (ts *mmapTorrentStorage) Close() error {
ts.pc.Close()
return ts.span.Close()
}
type mmapStoragePiece struct {
pc PieceCompletion
p metainfo.Piece
ih metainfo.Hash
io.ReaderAt
io.WriterAt
}
func (me mmapStoragePiece) pieceKey() metainfo.PieceKey {
return metainfo.PieceKey{me.ih, me.p.Index()}
}
func (sp mmapStoragePiece) Completion() Completion {
c, _ := sp.pc.Get(sp.pieceKey())
return c
}
func (sp mmapStoragePiece) MarkComplete() error {
sp.pc.Set(sp.pieceKey(), true)
return nil
}
func (sp mmapStoragePiece) MarkNotComplete() error {
sp.pc.Set(sp.pieceKey(), false)
return nil
}
func mMapTorrent(md *metainfo.Info, location string) (mms *mmap_span.MMapSpan, err error) {
mms = &mmap_span.MMapSpan{}
defer func() {
if err != nil {
mms.Close()
}
}()
for _, miFile := range md.UpvertedFiles() {
fileName := filepath.Join(append([]string{location, md.Name}, miFile.Path...)...)
var mm mmap.MMap
mm, err = mmapFile(fileName, miFile.Length)
if err != nil {
err = fmt.Errorf("file %q: %s", miFile.DisplayPath(md), err)
return
}
if mm != nil {
mms.Append(mm)
}
}
return
}
func mmapFile(name string, size int64) (ret mmap.MMap, err error) {
dir := filepath.Dir(name)
err = os.MkdirAll(dir, 0777)
if err != nil {
err = fmt.Errorf("making directory %q: %s", dir, err)
return
}
var file *os.File
file, err = os.OpenFile(name, os.O_CREATE|os.O_RDWR, 0666)
if err != nil {
return
}
defer file.Close()
var fi os.FileInfo
fi, err = file.Stat()
if err != nil {
return
}
if fi.Size() < size {
// I think this is necessary on HFS+. Maybe Linux will SIGBUS too if
// you overmap a file but I'm not sure.
err = file.Truncate(size)
if err != nil {
return
}
}
if size == 0 {
// Can't mmap() regions with length 0.
return
}
intLen := int(size)
if int64(intLen) != size {
err = errors.New("size too large for system")
return
}
ret, err = mmap.MapRegion(file, intLen, mmap.RDWR, 0, 0)
if err != nil {
err = fmt.Errorf("error mapping region: %s", err)
return
}
if int64(len(ret)) != size {
panic(len(ret))
}
return
}


@@ -0,0 +1,77 @@
package storage
import (
"path"
"github.com/anacrolix/missinggo/resource"
"github.com/anacrolix/torrent/metainfo"
)
type piecePerResource struct {
p resource.Provider
}
func NewResourcePieces(p resource.Provider) ClientImpl {
return &piecePerResource{
p: p,
}
}
func (s *piecePerResource) OpenTorrent(info *metainfo.Info, infoHash metainfo.Hash) (TorrentImpl, error) {
return s, nil
}
func (s *piecePerResource) Close() error {
return nil
}
func (s *piecePerResource) Piece(p metainfo.Piece) PieceImpl {
completed, err := s.p.NewInstance(path.Join("completed", p.Hash().HexString()))
if err != nil {
panic(err)
}
incomplete, err := s.p.NewInstance(path.Join("incomplete", p.Hash().HexString()))
if err != nil {
panic(err)
}
return piecePerResourcePiece{
p: p,
c: completed,
i: incomplete,
}
}
type piecePerResourcePiece struct {
p metainfo.Piece
c resource.Instance
i resource.Instance
}
func (s piecePerResourcePiece) Completion() Completion {
fi, err := s.c.Stat()
return Completion{
Complete: err == nil && fi.Size() == s.p.Length(),
Ok: true,
}
}
func (s piecePerResourcePiece) MarkComplete() error {
return resource.Move(s.i, s.c)
}
func (s piecePerResourcePiece) MarkNotComplete() error {
return s.c.Delete()
}
func (s piecePerResourcePiece) ReadAt(b []byte, off int64) (int, error) {
if s.Completion().Complete {
return s.c.ReadAt(b, off)
} else {
return s.i.ReadAt(b, off)
}
}
func (s piecePerResourcePiece) WriteAt(b []byte, off int64) (n int, err error) {
return s.i.WriteAt(b, off)
}


@@ -0,0 +1,56 @@
// +build cgo
package storage
import (
"database/sql"
"path/filepath"
_ "github.com/mattn/go-sqlite3"
"github.com/anacrolix/torrent/metainfo"
)
type sqlitePieceCompletion struct {
db *sql.DB
}
var _ PieceCompletion = (*sqlitePieceCompletion)(nil)
func NewSqlitePieceCompletion(dir string) (ret *sqlitePieceCompletion, err error) {
p := filepath.Join(dir, ".torrent.db")
db, err := sql.Open("sqlite3", p)
if err != nil {
return
}
db.SetMaxOpenConns(1)
db.Exec(`PRAGMA journal_mode=WAL`)
db.Exec(`PRAGMA synchronous=1`)
_, err = db.Exec(`create table if not exists piece_completion(infohash, "index", complete, unique(infohash, "index"))`)
if err != nil {
db.Close()
return
}
ret = &sqlitePieceCompletion{db}
return
}
func (me *sqlitePieceCompletion) Get(pk metainfo.PieceKey) (c Completion, err error) {
row := me.db.QueryRow(`select complete from piece_completion where infohash=? and "index"=?`, pk.InfoHash.HexString(), pk.Index)
err = row.Scan(&c.Complete)
if err == sql.ErrNoRows {
err = nil
} else if err == nil {
c.Ok = true
}
return
}
func (me *sqlitePieceCompletion) Set(pk metainfo.PieceKey, b bool) error {
_, err := me.db.Exec(`insert or replace into piece_completion(infohash, "index", complete) values(?, ?, ?)`, pk.InfoHash.HexString(), pk.Index, b)
return err
}
func (me *sqlitePieceCompletion) Close() error {
return me.db.Close()
}


@@ -0,0 +1,86 @@
package storage
import (
"io"
"os"
"github.com/anacrolix/missinggo"
"github.com/anacrolix/torrent/metainfo"
)
type Client struct {
ci ClientImpl
}
func NewClient(cl ClientImpl) *Client {
return &Client{cl}
}
func (cl Client) OpenTorrent(info *metainfo.Info, infoHash metainfo.Hash) (*Torrent, error) {
t, err := cl.ci.OpenTorrent(info, infoHash)
return &Torrent{t}, err
}
type Torrent struct {
TorrentImpl
}
func (t Torrent) Piece(p metainfo.Piece) Piece {
return Piece{t.TorrentImpl.Piece(p), p}
}
type Piece struct {
PieceImpl
mip metainfo.Piece
}
func (p Piece) WriteAt(b []byte, off int64) (n int, err error) {
// Callers should not be writing to completed pieces, but it's too
// expensive to be checking this on every single write using uncached
// completions.
// c := p.Completion()
// if c.Ok && c.Complete {
// err = errors.New("piece already completed")
// return
// }
if off+int64(len(b)) > p.mip.Length() {
panic("write overflows piece")
}
b = missinggo.LimitLen(b, p.mip.Length()-off)
return p.PieceImpl.WriteAt(b, off)
}
func (p Piece) ReadAt(b []byte, off int64) (n int, err error) {
if off < 0 {
err = os.ErrInvalid
return
}
if off >= p.mip.Length() {
err = io.EOF
return
}
b = missinggo.LimitLen(b, p.mip.Length()-off)
if len(b) == 0 {
return
}
n, err = p.PieceImpl.ReadAt(b, off)
if n > len(b) {
panic(n)
}
off += int64(n)
if err == io.EOF && off < p.mip.Length() {
err = io.ErrUnexpectedEOF
}
if err == nil && off >= p.mip.Length() {
err = io.EOF
}
if n == 0 && err == nil {
err = io.ErrUnexpectedEOF
}
if off < p.mip.Length() && err != nil {
p.MarkNotComplete()
}
return
}

240
vendor/github.com/anacrolix/torrent/t.go generated vendored Normal file

@@ -0,0 +1,240 @@
package torrent
import (
"strings"
"github.com/anacrolix/missinggo/pubsub"
"github.com/anacrolix/torrent/metainfo"
)
// The torrent's infohash. This is fixed and cannot change. It uniquely
// identifies a torrent.
func (t *Torrent) InfoHash() metainfo.Hash {
return t.infoHash
}
// Returns a channel that is closed when the info (.Info()) for the torrent
// has become available.
func (t *Torrent) GotInfo() <-chan struct{} {
t.cl.lock()
defer t.cl.unlock()
return t.gotMetainfo.C()
}
// Returns the metainfo info dictionary, or nil if it's not yet available.
func (t *Torrent) Info() *metainfo.Info {
t.cl.lock()
defer t.cl.unlock()
return t.info
}
// Returns a Reader bound to the torrent's data. All read calls block until
// the data requested is actually available.
func (t *Torrent) NewReader() Reader {
r := reader{
mu: t.cl.locker(),
t: t,
readahead: 5 * 1024 * 1024,
length: *t.length,
}
t.addReader(&r)
return &r
}
// Returns the state of the torrent's pieces, grouped into runs of the same
// state. The sum of the run lengths equals the number of pieces in the
// torrent.
func (t *Torrent) PieceStateRuns() []PieceStateRun {
t.cl.lock()
defer t.cl.unlock()
return t.pieceStateRuns()
}
func (t *Torrent) PieceState(piece pieceIndex) PieceState {
t.cl.lock()
defer t.cl.unlock()
return t.pieceState(piece)
}
// The number of pieces in the torrent. This requires that the info has been
// obtained first.
func (t *Torrent) NumPieces() pieceIndex {
return t.numPieces()
}
// Returns the number of bytes missing from a specific piece.
func (t *Torrent) PieceBytesMissing(piece int) int64 {
t.cl.lock()
defer t.cl.unlock()
return int64(t.pieces[piece].bytesLeft())
}
// Drop the torrent from the client, and close it. It's always safe to do
// this: no data corruption can, or should, occur to either the torrent's
// data or connected peers.
func (t *Torrent) Drop() {
t.cl.lock()
t.cl.dropTorrent(t.infoHash)
t.cl.unlock()
}
// Number of bytes of the entire torrent we have completed. This is the sum of
// completed pieces, and dirtied chunks of incomplete pieces. Do not use this
// for download rate, as it can go down when pieces are lost or fail checks.
// Sample Torrent.Stats.DataBytesRead for actual file data download rate.
func (t *Torrent) BytesCompleted() int64 {
t.cl.rLock()
defer t.cl.rUnlock()
return t.bytesCompleted()
}
// The subscription emits as (int) the index of pieces as their state changes.
// A state change is when the PieceState for a piece alters in value.
func (t *Torrent) SubscribePieceStateChanges() *pubsub.Subscription {
return t.pieceStateChanges.Subscribe()
}
// Returns true if the torrent is currently being seeded. This occurs when the
// client is willing to upload without wanting anything in return.
func (t *Torrent) Seeding() bool {
t.cl.lock()
defer t.cl.unlock()
return t.seeding()
}
// Clobbers the torrent display name. The display name is used as the torrent
// name if the metainfo is not available.
func (t *Torrent) SetDisplayName(dn string) {
t.cl.lock()
defer t.cl.unlock()
t.setDisplayName(dn)
}
// The current working name for the torrent. Either the name in the info dict,
// or a display name given such as by the dn value in a magnet link, or "".
func (t *Torrent) Name() string {
t.cl.lock()
defer t.cl.unlock()
return t.name()
}
// The completed length of all the torrent data, in all its files. This is
// derived from the torrent info, when it is available.
func (t *Torrent) Length() int64 {
return *t.length
}
// Returns a run-time generated metainfo for the torrent that includes the
// info bytes and announce-list as currently known to the client.
func (t *Torrent) Metainfo() metainfo.MetaInfo {
t.cl.lock()
defer t.cl.unlock()
return t.newMetaInfo()
}
func (t *Torrent) addReader(r *reader) {
t.cl.lock()
defer t.cl.unlock()
if t.readers == nil {
t.readers = make(map[*reader]struct{})
}
t.readers[r] = struct{}{}
r.posChanged()
}
func (t *Torrent) deleteReader(r *reader) {
delete(t.readers, r)
t.readersChanged()
}
// Raise the priorities of pieces in the range [begin, end) to at least Normal
// priority. Piece indexes are not the same as bytes. Requires that the info
// has been obtained, see Torrent.Info and Torrent.GotInfo.
func (t *Torrent) DownloadPieces(begin, end pieceIndex) {
t.cl.lock()
defer t.cl.unlock()
t.downloadPiecesLocked(begin, end)
}
func (t *Torrent) downloadPiecesLocked(begin, end pieceIndex) {
for i := begin; i < end; i++ {
if t.pieces[i].priority.Raise(PiecePriorityNormal) {
t.updatePiecePriority(i)
}
}
}
func (t *Torrent) CancelPieces(begin, end pieceIndex) {
t.cl.lock()
defer t.cl.unlock()
t.cancelPiecesLocked(begin, end)
}
func (t *Torrent) cancelPiecesLocked(begin, end pieceIndex) {
for i := begin; i < end; i++ {
p := &t.pieces[i]
if p.priority == PiecePriorityNone {
continue
}
p.priority = PiecePriorityNone
t.updatePiecePriority(i)
}
}
func (t *Torrent) initFiles() {
var offset int64
t.files = new([]*File)
for _, fi := range t.info.UpvertedFiles() {
*t.files = append(*t.files, &File{
t,
strings.Join(append([]string{t.info.Name}, fi.Path...), "/"),
offset,
fi.Length,
fi,
PiecePriorityNone,
})
offset += fi.Length
}
}
// Returns handles to the files in the torrent. This requires that the Info is
// available first.
func (t *Torrent) Files() []*File {
return *t.files
}
func (t *Torrent) AddPeers(pp []Peer) {
cl := t.cl
cl.lock()
defer cl.unlock()
t.addPeers(pp)
}
// Marks the entire torrent for download. Requires the info first, see
// GotInfo. Sets piece priorities for historical reasons.
func (t *Torrent) DownloadAll() {
t.DownloadPieces(0, t.numPieces())
}
func (t *Torrent) String() string {
s := t.name()
if s == "" {
s = t.infoHash.HexString()
}
return s
}
func (t *Torrent) AddTrackers(announceList [][]string) {
t.cl.lock()
defer t.cl.unlock()
t.addTrackers(announceList)
}
func (t *Torrent) Piece(i pieceIndex) *Piece {
t.cl.lock()
defer t.cl.unlock()
return &t.pieces[i]
}

1791
vendor/github.com/anacrolix/torrent/torrent.go generated vendored Normal file

File diff suppressed because it is too large

14
vendor/github.com/anacrolix/torrent/torrent_stats.go generated vendored Normal file

@@ -0,0 +1,14 @@
package torrent
type TorrentStats struct {
// Aggregates stats over all connections past and present. Some values may
// not have much meaning in the aggregate context.
ConnStats
// Ordered by expected descending quantities (if all is well).
TotalPeers int
PendingPeers int
ActivePeers int
ConnectedSeeders int
HalfOpenPeers int
}


@@ -0,0 +1,7 @@
package tracker
import (
"expvar"
)
var vars = expvar.NewMap("tracker")

140
vendor/github.com/anacrolix/torrent/tracker/http.go generated vendored Normal file

@@ -0,0 +1,140 @@
package tracker
import (
"bytes"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"github.com/anacrolix/dht/krpc"
"github.com/anacrolix/missinggo/httptoo"
"github.com/anacrolix/torrent/bencode"
)
type httpResponse struct {
FailureReason string `bencode:"failure reason"`
Interval int32 `bencode:"interval"`
TrackerId string `bencode:"tracker id"`
Complete int32 `bencode:"complete"`
Incomplete int32 `bencode:"incomplete"`
Peers Peers `bencode:"peers"`
// BEP 7
Peers6 krpc.CompactIPv6NodeAddrs `bencode:"peers6"`
}
type Peers []Peer
func (me *Peers) UnmarshalBencode(b []byte) (err error) {
var _v interface{}
err = bencode.Unmarshal(b, &_v)
if err != nil {
return
}
switch v := _v.(type) {
case string:
vars.Add("http responses with string peers", 1)
var cnas krpc.CompactIPv4NodeAddrs
err = cnas.UnmarshalBinary([]byte(v))
if err != nil {
return
}
for _, cp := range cnas {
*me = append(*me, Peer{
IP: cp.IP[:],
Port: int(cp.Port),
})
}
return
case []interface{}:
vars.Add("http responses with list peers", 1)
for _, i := range v {
var p Peer
p.fromDictInterface(i.(map[string]interface{}))
*me = append(*me, p)
}
return
default:
vars.Add("http responses with unhandled peers type", 1)
err = fmt.Errorf("unsupported type: %T", _v)
return
}
}
func setAnnounceParams(_url *url.URL, ar *AnnounceRequest, opts Announce) {
q := _url.Query()
q.Set("info_hash", string(ar.InfoHash[:]))
q.Set("peer_id", string(ar.PeerId[:]))
// AFAICT, port is mandatory, and there's no implied port key.
q.Set("port", fmt.Sprintf("%d", ar.Port))
q.Set("uploaded", strconv.FormatInt(ar.Uploaded, 10))
q.Set("downloaded", strconv.FormatInt(ar.Downloaded, 10))
q.Set("left", strconv.FormatUint(ar.Left, 10))
if ar.Event != None {
q.Set("event", ar.Event.String())
}
// http://stackoverflow.com/questions/17418004/why-does-tracker-server-not-understand-my-request-bittorrent-protocol
q.Set("compact", "1")
// According to https://wiki.vuze.com/w/Message_Stream_Encryption. TODO:
// Take EncryptionPolicy or something like it as a parameter.
q.Set("supportcrypto", "1")
if opts.ClientIp4.IP != nil {
q.Set("ipv4", opts.ClientIp4.String())
}
if opts.ClientIp6.IP != nil {
q.Set("ipv6", opts.ClientIp6.String())
}
_url.RawQuery = q.Encode()
}
func announceHTTP(opt Announce, _url *url.URL) (ret AnnounceResponse, err error) {
_url = httptoo.CopyURL(_url)
setAnnounceParams(_url, &opt.Request, opt)
req, err := http.NewRequest("GET", _url.String(), nil)
if err != nil {
return
}
req.Header.Set("User-Agent", opt.UserAgent)
req.Host = opt.HostHeader
resp, err := opt.HttpClient.Do(req)
if err != nil {
return
}
defer resp.Body.Close()
var buf bytes.Buffer
io.Copy(&buf, resp.Body)
if resp.StatusCode != 200 {
err = fmt.Errorf("response from tracker: %s: %s", resp.Status, buf.String())
return
}
var trackerResponse httpResponse
err = bencode.Unmarshal(buf.Bytes(), &trackerResponse)
if _, ok := err.(bencode.ErrUnusedTrailingBytes); ok {
err = nil
} else if err != nil {
err = fmt.Errorf("error decoding %q: %s", buf.Bytes(), err)
return
}
if trackerResponse.FailureReason != "" {
err = fmt.Errorf("tracker gave failure reason: %q", trackerResponse.FailureReason)
return
}
vars.Add("successful http announces", 1)
ret.Interval = trackerResponse.Interval
ret.Leechers = trackerResponse.Incomplete
ret.Seeders = trackerResponse.Complete
if len(trackerResponse.Peers) != 0 {
vars.Add("http responses with nonempty peers key", 1)
}
ret.Peers = trackerResponse.Peers
if len(trackerResponse.Peers6) != 0 {
vars.Add("http responses with nonempty peers6 key", 1)
}
for _, na := range trackerResponse.Peers6 {
ret.Peers = append(ret.Peers, Peer{
IP: na.IP,
Port: na.Port,
})
}
return
}
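Peers.UnmarshalBencode above hands string-form peers to krpc.CompactIPv4NodeAddrs. For reference, a stdlib-only sketch of that compact format (BEP 23): each peer is 6 bytes, a 4-byte IPv4 address followed by a 2-byte big-endian port. The decodeCompactPeers helper name is hypothetical.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"net"
)

// decodeCompactPeers decodes BEP 23 compact peers: 6 bytes per peer,
// 4 bytes of IPv4 address then a big-endian uint16 port.
func decodeCompactPeers(b []byte) (peers []string, err error) {
	if len(b)%6 != 0 {
		return nil, fmt.Errorf("compact peers length %d not a multiple of 6", len(b))
	}
	for i := 0; i < len(b); i += 6 {
		ip := net.IP(b[i : i+4])
		port := binary.BigEndian.Uint16(b[i+4 : i+6])
		peers = append(peers, fmt.Sprintf("%s:%d", ip, port))
	}
	return
}

func main() {
	peers, err := decodeCompactPeers([]byte{127, 0, 0, 1, 0x1a, 0xe1})
	fmt.Println(peers, err) // [127.0.0.1:6881] <nil>
}
```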

26
vendor/github.com/anacrolix/torrent/tracker/peer.go generated vendored Normal file

@@ -0,0 +1,26 @@
package tracker
import (
"net"
"github.com/anacrolix/dht/krpc"
)
type Peer struct {
IP net.IP
Port int
ID []byte
}
// Set from the non-compact form in BEP 3.
func (p *Peer) fromDictInterface(d map[string]interface{}) {
p.IP = net.ParseIP(d["ip"].(string))
p.ID = []byte(d["peer id"].(string))
p.Port = int(d["port"].(int64))
}
func (p Peer) FromNodeAddr(na krpc.NodeAddr) Peer {
p.IP = na.IP
p.Port = na.Port
return p
}

124
vendor/github.com/anacrolix/torrent/tracker/server.go generated vendored Normal file

@@ -0,0 +1,124 @@
package tracker
import (
"bytes"
"encoding"
"encoding/binary"
"fmt"
"math/rand"
"net"
"github.com/anacrolix/dht/krpc"
"github.com/anacrolix/missinggo"
)
type torrent struct {
Leechers int32
Seeders int32
Peers []krpc.NodeAddr
}
type server struct {
pc net.PacketConn
conns map[int64]struct{}
t map[[20]byte]torrent
}
func marshal(parts ...interface{}) (ret []byte, err error) {
var buf bytes.Buffer
for _, p := range parts {
err = binary.Write(&buf, binary.BigEndian, p)
if err != nil {
return
}
}
ret = buf.Bytes()
return
}
func (s *server) respond(addr net.Addr, rh ResponseHeader, parts ...interface{}) (err error) {
b, err := marshal(append([]interface{}{rh}, parts...)...)
if err != nil {
return
}
_, err = s.pc.WriteTo(b, addr)
return
}
func (s *server) newConn() (ret int64) {
ret = rand.Int63()
if s.conns == nil {
s.conns = make(map[int64]struct{})
}
s.conns[ret] = struct{}{}
return
}
func (s *server) serveOne() (err error) {
b := make([]byte, 0x10000)
n, addr, err := s.pc.ReadFrom(b)
if err != nil {
return
}
r := bytes.NewReader(b[:n])
var h RequestHeader
err = readBody(r, &h)
if err != nil {
return
}
switch h.Action {
case ActionConnect:
if h.ConnectionId != connectRequestConnectionId {
return
}
connId := s.newConn()
err = s.respond(addr, ResponseHeader{
ActionConnect,
h.TransactionId,
}, ConnectionResponse{
connId,
})
return
case ActionAnnounce:
if _, ok := s.conns[h.ConnectionId]; !ok {
s.respond(addr, ResponseHeader{
TransactionId: h.TransactionId,
Action: ActionError,
}, []byte("not connected"))
return
}
var ar AnnounceRequest
err = readBody(r, &ar)
if err != nil {
return
}
t := s.t[ar.InfoHash]
bm := func() encoding.BinaryMarshaler {
ip := missinggo.AddrIP(addr)
if ip.To4() != nil {
return krpc.CompactIPv4NodeAddrs(t.Peers)
}
return krpc.CompactIPv6NodeAddrs(t.Peers)
}()
b, err = bm.MarshalBinary()
if err != nil {
panic(err)
}
err = s.respond(addr, ResponseHeader{
TransactionId: h.TransactionId,
Action: ActionAnnounce,
}, AnnounceResponseHeader{
Interval: 900,
Leechers: t.Leechers,
Seeders: t.Seeders,
}, b)
return
default:
err = fmt.Errorf("unhandled action: %d", h.Action)
s.respond(addr, ResponseHeader{
TransactionId: h.TransactionId,
Action: ActionError,
}, []byte("unhandled action"))
return
}
}

81
vendor/github.com/anacrolix/torrent/tracker/tracker.go generated vendored Normal file

@@ -0,0 +1,81 @@
package tracker
import (
"errors"
"net/http"
"net/url"
"github.com/anacrolix/dht/krpc"
)
// Marshalled as binary by the UDP client, so be careful making changes.
type AnnounceRequest struct {
InfoHash [20]byte
PeerId [20]byte
Downloaded int64
Left uint64
Uploaded int64
// Apparently this is optional. None can be used for announces done at
// regular intervals.
Event AnnounceEvent
IPAddress uint32
Key int32
NumWant int32 // How many peer addresses are desired. -1 for default.
Port uint16
} // 82 bytes
type AnnounceResponse struct {
Interval int32 // Minimum seconds the local peer should wait before next announce.
Leechers int32
Seeders int32
Peers []Peer
}
type AnnounceEvent int32
func (e AnnounceEvent) String() string {
// See BEP 3, "event".
return []string{"empty", "completed", "started", "stopped"}[e]
}
const (
None AnnounceEvent = iota
Completed // The local peer just completed the torrent.
Started // The local peer has just resumed this torrent.
Stopped // The local peer is leaving the swarm.
)
var (
ErrBadScheme = errors.New("unknown scheme")
)
type Announce struct {
TrackerUrl string
Request AnnounceRequest
HostHeader string
UserAgent string
HttpClient *http.Client
UdpNetwork string
// If the port is zero, it's assumed to be the same as the Request.Port
ClientIp4 krpc.NodeAddr
// If the port is zero, it's assumed to be the same as the Request.Port
ClientIp6 krpc.NodeAddr
}
// In an FP language with currying, in what order would you put these params?
func (me Announce) Do() (res AnnounceResponse, err error) {
_url, err := url.Parse(me.TrackerUrl)
if err != nil {
return
}
switch _url.Scheme {
case "http", "https":
return announceHTTP(me, _url)
case "udp", "udp4", "udp6":
return announceUDP(me, _url)
default:
err = ErrBadScheme
return
}
}
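Announce.Do dispatches purely on the parsed tracker URL's scheme. A stdlib-only sketch of the same routing (the protocolFor helper is hypothetical):

```go
package main

import (
	"fmt"
	"net/url"
)

// protocolFor mirrors the scheme switch in Announce.Do: http(s) URLs go to
// the HTTP announcer, udp* URLs to the UDP announcer, anything else errors.
func protocolFor(tracker string) (string, error) {
	u, err := url.Parse(tracker)
	if err != nil {
		return "", err
	}
	switch u.Scheme {
	case "http", "https":
		return "http", nil
	case "udp", "udp4", "udp6":
		return "udp", nil
	}
	return "", fmt.Errorf("unknown scheme %q", u.Scheme)
}

func main() {
	p, _ := protocolFor("udp://tracker.example.com:6969/announce")
	fmt.Println(p) // udp
}
```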

298
vendor/github.com/anacrolix/torrent/tracker/udp.go generated vendored Normal file

@@ -0,0 +1,298 @@
package tracker
import (
"bytes"
"encoding"
"encoding/binary"
"errors"
"fmt"
"io"
"math/rand"
"net"
"net/url"
"time"
"github.com/anacrolix/dht/krpc"
"github.com/anacrolix/missinggo"
"github.com/anacrolix/missinggo/pproffd"
)
type Action int32
const (
ActionConnect Action = iota
ActionAnnounce
ActionScrape
ActionError
connectRequestConnectionId = 0x41727101980
// BEP 41
optionTypeEndOfOptions = 0
optionTypeNOP = 1
optionTypeURLData = 2
)
type ConnectionRequest struct {
ConnectionId int64
Action int32
TransctionId int32
}
type ConnectionResponse struct {
ConnectionId int64
}
type ResponseHeader struct {
Action Action
TransactionId int32
}
type RequestHeader struct {
ConnectionId int64
Action Action
TransactionId int32
} // 16 bytes
type AnnounceResponseHeader struct {
Interval int32
Leechers int32
Seeders int32
}
func newTransactionId() int32 {
return int32(rand.Uint32())
}
func timeout(contiguousTimeouts int) (d time.Duration) {
if contiguousTimeouts > 8 {
contiguousTimeouts = 8
}
d = 15 * time.Second
for ; contiguousTimeouts > 0; contiguousTimeouts-- {
d *= 2
}
return
}
type udpAnnounce struct {
contiguousTimeouts int
connectionIdReceived time.Time
connectionId int64
socket net.Conn
url url.URL
a *Announce
}
func (c *udpAnnounce) Close() error {
if c.socket != nil {
return c.socket.Close()
}
return nil
}
func (c *udpAnnounce) ipv6() bool {
if c.a.UdpNetwork == "udp6" {
return true
}
rip := missinggo.AddrIP(c.socket.RemoteAddr())
return rip.To16() != nil && rip.To4() == nil
}
func (c *udpAnnounce) Do(req AnnounceRequest) (res AnnounceResponse, err error) {
err = c.connect()
if err != nil {
return
}
reqURI := c.url.RequestURI()
if c.ipv6() {
// BEP 15
req.IPAddress = 0
} else if req.IPAddress == 0 && c.a.ClientIp4.IP != nil {
req.IPAddress = binary.BigEndian.Uint32(c.a.ClientIp4.IP.To4())
}
// Clearly this limits the request URI to 255 bytes. BEP 41 supports
// longer but I'm not fussed.
options := append([]byte{optionTypeURLData, byte(len(reqURI))}, []byte(reqURI)...)
b, err := c.request(ActionAnnounce, req, options)
if err != nil {
return
}
var h AnnounceResponseHeader
err = readBody(b, &h)
if err != nil {
if err == io.EOF {
err = io.ErrUnexpectedEOF
}
err = fmt.Errorf("error parsing announce response: %s", err)
return
}
res.Interval = h.Interval
res.Leechers = h.Leechers
res.Seeders = h.Seeders
nas := func() interface {
encoding.BinaryUnmarshaler
NodeAddrs() []krpc.NodeAddr
} {
if c.ipv6() {
return &krpc.CompactIPv6NodeAddrs{}
} else {
return &krpc.CompactIPv4NodeAddrs{}
}
}()
err = nas.UnmarshalBinary(b.Bytes())
if err != nil {
return
}
for _, cp := range nas.NodeAddrs() {
res.Peers = append(res.Peers, Peer{}.FromNodeAddr(cp))
}
return
}
// body is the binary serializable request body. trailer is optional data
// following it, such as for BEP 41.
func (c *udpAnnounce) write(h *RequestHeader, body interface{}, trailer []byte) (err error) {
var buf bytes.Buffer
err = binary.Write(&buf, binary.BigEndian, h)
if err != nil {
panic(err)
}
if body != nil {
err = binary.Write(&buf, binary.BigEndian, body)
if err != nil {
panic(err)
}
}
_, err = buf.Write(trailer)
if err != nil {
return
}
n, err := c.socket.Write(buf.Bytes())
if err != nil {
return
}
if n != buf.Len() {
panic("write should send all or error")
}
return
}
func read(r io.Reader, data interface{}) error {
return binary.Read(r, binary.BigEndian, data)
}
func write(w io.Writer, data interface{}) error {
return binary.Write(w, binary.BigEndian, data)
}
// args is the binary serializable request body. trailer is optional data
// following it, such as for BEP 41.
func (c *udpAnnounce) request(action Action, args interface{}, options []byte) (responseBody *bytes.Buffer, err error) {
tid := newTransactionId()
err = c.write(&RequestHeader{
ConnectionId: c.connectionId,
Action: action,
TransactionId: tid,
}, args, options)
if err != nil {
return
}
c.socket.SetReadDeadline(time.Now().Add(timeout(c.contiguousTimeouts)))
b := make([]byte, 0x800) // 2KiB
for {
var n int
n, err = c.socket.Read(b)
if opE, ok := err.(*net.OpError); ok {
if opE.Timeout() {
c.contiguousTimeouts++
return
}
}
if err != nil {
return
}
buf := bytes.NewBuffer(b[:n])
var h ResponseHeader
err = binary.Read(buf, binary.BigEndian, &h)
switch err {
case io.ErrUnexpectedEOF:
continue
case nil:
default:
return
}
if h.TransactionId != tid {
continue
}
c.contiguousTimeouts = 0
if h.Action == ActionError {
err = errors.New(buf.String())
}
responseBody = buf
return
}
}
func readBody(r io.Reader, data ...interface{}) (err error) {
for _, datum := range data {
err = binary.Read(r, binary.BigEndian, datum)
if err != nil {
break
}
}
return
}
func (c *udpAnnounce) connected() bool {
return !c.connectionIdReceived.IsZero() && time.Now().Before(c.connectionIdReceived.Add(time.Minute))
}
func (c *udpAnnounce) dialNetwork() string {
if c.a.UdpNetwork != "" {
return c.a.UdpNetwork
}
return "udp"
}
func (c *udpAnnounce) connect() (err error) {
if c.connected() {
return nil
}
c.connectionId = connectRequestConnectionId
if c.socket == nil {
hmp := missinggo.SplitHostMaybePort(c.url.Host)
if hmp.NoPort {
hmp.NoPort = false
hmp.Port = 80
}
c.socket, err = net.Dial(c.dialNetwork(), hmp.String())
if err != nil {
return
}
c.socket = pproffd.WrapNetConn(c.socket)
}
b, err := c.request(ActionConnect, nil, nil)
if err != nil {
return
}
var res ConnectionResponse
err = readBody(b, &res)
if err != nil {
return
}
c.connectionId = res.ConnectionId
c.connectionIdReceived = time.Now()
return
}
// TODO: Split on IPv6, as BEP 15 says response peer decoding depends on
// network in use.
func announceUDP(opt Announce, _url *url.URL) (AnnounceResponse, error) {
ua := udpAnnounce{
url: *_url,
a: &opt,
}
defer ua.Close()
return ua.Do(opt.Request)
}

158
vendor/github.com/anacrolix/torrent/tracker_scraper.go generated vendored Normal file

@@ -0,0 +1,158 @@
package torrent
import (
"bytes"
"errors"
"fmt"
"net"
"net/url"
"time"
"github.com/anacrolix/dht/krpc"
"github.com/anacrolix/missinggo"
"github.com/anacrolix/torrent/tracker"
)
// Announces a torrent to a tracker at regular intervals, when peers are
// required.
type trackerScraper struct {
u url.URL
// Causes the trackerScraper to stop running.
stop missinggo.Event
t *Torrent
lastAnnounce trackerAnnounceResult
}
func (ts *trackerScraper) statusLine() string {
var w bytes.Buffer
fmt.Fprintf(&w, "%q\t%s\t%s",
ts.u.String(),
func() string {
na := time.Until(ts.lastAnnounce.Completed.Add(ts.lastAnnounce.Interval))
if na > 0 {
na /= time.Second
na *= time.Second
return na.String()
} else {
return "anytime"
}
}(),
func() string {
if ts.lastAnnounce.Err != nil {
return ts.lastAnnounce.Err.Error()
}
if ts.lastAnnounce.Completed.IsZero() {
return "never"
}
return fmt.Sprintf("%d peers", ts.lastAnnounce.NumPeers)
}(),
)
return w.String()
}
type trackerAnnounceResult struct {
Err error
NumPeers int
Interval time.Duration
Completed time.Time
}
func (me *trackerScraper) getIp() (ip net.IP, err error) {
ips, err := net.LookupIP(me.u.Hostname())
if err != nil {
return
}
if len(ips) == 0 {
err = errors.New("no ips")
return
}
for _, ip = range ips {
if me.t.cl.ipIsBlocked(ip) {
continue
}
switch me.u.Scheme {
case "udp4":
if ip.To4() == nil {
continue
}
case "udp6":
if ip.To4() != nil {
continue
}
}
return
}
err = errors.New("no acceptable ips")
return
}
func (me *trackerScraper) trackerUrl(ip net.IP) string {
u := me.u
if u.Port() != "" {
u.Host = net.JoinHostPort(ip.String(), u.Port())
}
return u.String()
}
// Return how long to wait before trying again. For most errors, we return 5
// minutes, a relatively quick turnaround for DNS changes.
func (me *trackerScraper) announce() (ret trackerAnnounceResult) {
defer func() {
ret.Completed = time.Now()
}()
ret.Interval = 5 * time.Minute
ip, err := me.getIp()
if err != nil {
ret.Err = fmt.Errorf("error getting ip: %s", err)
return
}
me.t.cl.lock()
req := me.t.announceRequest()
me.t.cl.unlock()
res, err := tracker.Announce{
HttpClient: me.t.cl.config.TrackerHttpClient,
UserAgent: me.t.cl.config.HTTPUserAgent,
TrackerUrl: me.trackerUrl(ip),
Request: req,
HostHeader: me.u.Host,
UdpNetwork: me.u.Scheme,
ClientIp4: krpc.NodeAddr{IP: me.t.cl.config.PublicIp4},
ClientIp6: krpc.NodeAddr{IP: me.t.cl.config.PublicIp6},
}.Do()
if err != nil {
ret.Err = fmt.Errorf("error announcing: %s", err)
return
}
me.t.AddPeers(Peers(nil).AppendFromTracker(res.Peers))
ret.NumPeers = len(res.Peers)
ret.Interval = time.Duration(res.Interval) * time.Second
return
}
func (me *trackerScraper) Run() {
for {
select {
case <-me.t.closed.LockedChan(me.t.cl.locker()):
return
case <-me.stop.LockedChan(me.t.cl.locker()):
return
case <-me.t.wantPeersEvent.LockedChan(me.t.cl.locker()):
}
ar := me.announce()
me.t.cl.lock()
me.lastAnnounce = ar
me.t.cl.unlock()
intervalChan := time.After(time.Until(ar.Completed.Add(ar.Interval)))
select {
case <-me.t.closed.LockedChan(me.t.cl.locker()):
return
case <-me.stop.LockedChan(me.t.cl.locker()):
return
case <-intervalChan:
}
}
}

18
vendor/github.com/anacrolix/torrent/utp.go generated vendored Normal file

@@ -0,0 +1,18 @@
package torrent
import (
"context"
"net"
)
// Abstracts the utp Socket, so the implementation can be selected from
// different packages.
type utpSocket interface {
net.PacketConn
// net.Listener, but we can't have duplicate Close.
Accept() (net.Conn, error)
Addr() net.Addr
// net.Dialer but there's no interface.
DialContext(ctx context.Context, network, addr string) (net.Conn, error)
// Dial(addr string) (net.Conn, error)
}

16
vendor/github.com/anacrolix/torrent/utp_go.go generated vendored Normal file

@@ -0,0 +1,16 @@
// +build !cgo disable_libutp
package torrent
import (
"github.com/anacrolix/utp"
)
func NewUtpSocket(network, addr string, _ firewallCallback) (utpSocket, error) {
s, err := utp.NewSocket(network, addr)
if s == nil {
return nil, err
} else {
return s, err
}
}

21
vendor/github.com/anacrolix/torrent/utp_libutp.go generated vendored Normal file

@@ -0,0 +1,21 @@
// +build cgo,!disable_libutp
package torrent
import (
"github.com/anacrolix/go-libutp"
)
func NewUtpSocket(network, addr string, fc firewallCallback) (utpSocket, error) {
s, err := utp.NewSocket(network, addr)
if s == nil {
return nil, err
}
if err != nil {
return s, err
}
if fc != nil {
s.SetFirewallCallback(utp.FirewallCallback(fc))
}
return s, err
}

56
vendor/github.com/anacrolix/torrent/worst_conns.go generated vendored Normal file

@@ -0,0 +1,56 @@
package torrent
import (
"container/heap"
"fmt"
"unsafe"
)
func worseConn(l, r *connection) bool {
var ml multiLess
ml.NextBool(!l.useful(), !r.useful())
ml.StrictNext(
l.lastHelpful().Equal(r.lastHelpful()),
l.lastHelpful().Before(r.lastHelpful()))
ml.StrictNext(
l.completedHandshake.Equal(r.completedHandshake),
l.completedHandshake.Before(r.completedHandshake))
ml.Next(func() (bool, bool) {
return l.peerPriority() == r.peerPriority(), l.peerPriority() < r.peerPriority()
})
ml.StrictNext(l == r, uintptr(unsafe.Pointer(l)) < uintptr(unsafe.Pointer(r)))
less, ok := ml.FinalOk()
if !ok {
panic(fmt.Sprintf("cannot differentiate %#v and %#v", l, r))
}
return less
}
type worseConnSlice struct {
conns []*connection
}
var _ heap.Interface = &worseConnSlice{}
func (me worseConnSlice) Len() int {
return len(me.conns)
}
func (me worseConnSlice) Less(i, j int) bool {
return worseConn(me.conns[i], me.conns[j])
}
func (me *worseConnSlice) Pop() interface{} {
i := len(me.conns) - 1
ret := me.conns[i]
me.conns = me.conns[:i]
return ret
}
func (me *worseConnSlice) Push(x interface{}) {
me.conns = append(me.conns, x.(*connection))
}
func (me worseConnSlice) Swap(i, j int) {
me.conns[i], me.conns[j] = me.conns[j], me.conns[i]
}