1 Answer
The problem appears to be the repeated calls to gzip.NewWriter() in func(*CursorReader) Read([]byte) (int, error).
You are allocating a new gzip.Writer on every call to Read. gzip compression is stateful, so you must use a single Writer instance for all of the data.
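For context, the broken pattern presumably looks roughly like the sketch below. This is hypothetical (the cursor field and serialization are assumptions, not the asker's actual code); the only point is the per-call gzip.NewWriter.
// Hypothetical sketch of the pattern described above: a new gzip.Writer
// is created on every Read call, so each chunk becomes its own gzip
// stream with its own header and footer, and the compressor state is
// discarded between calls.
func (r *CursorReader) Read(p []byte) (int, error) {
    if !r.cursor.Next(context.TODO()) {
        return 0, io.EOF
    }
    var buf bytes.Buffer
    gz := gzip.NewWriter(&buf) // <- allocated per call: this is the problem
    io.WriteString(gz, r.cursor.Current.String())
    gz.Close()
    return copy(p, buf.Bytes()), nil
}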
Solution #1
A fairly simple fix is to read all of the rows from the cursor, write them through a single gzip.Writer, and store the gzipped output in an in-memory buffer.
var cursor, _ = collection.Find(context.TODO(), filter)
defer cursor.Close(context.TODO())

// prepare a buffer to hold the gzipped data
var buffer bytes.Buffer
var gz = gzip.NewWriter(&buffer)

for cursor.Next(context.TODO()) {
    if _, err := io.WriteString(gz, cursor.Current.String()); err != nil {
        // handle error somehow ˉ\_(ツ)_/ˉ
    }
}

// close the gzip stream before uploading so the remaining
// compressed data is flushed into the buffer
if err := gz.Close(); err != nil {
    // handle error somehow ˉ\_(ツ)_/ˉ
}

// you can now use buffer as an io.Reader
// and it'll contain the gzipped data for your serialized rows
_, err = s3.Upload(&s3.UploadInput{
    Bucket: aws.String("..."),
    Key:    aws.String("..."),
    Body:   &buffer,
})
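As an optional sanity check (not part of the original answer), you can verify the buffer holds a single valid gzip stream; run something like this before handing the buffer to Upload, since Upload will consume it:
// Optional: decode the buffer through gzip.NewReader to confirm
// the compressed stream round-trips cleanly.
zr, err := gzip.NewReader(bytes.NewReader(buffer.Bytes()))
if err != nil {
    // handle error somehow ˉ\_(ツ)_/ˉ
}
plain, err := io.ReadAll(zr)
if err != nil {
    // handle error somehow ˉ\_(ツ)_/ˉ
}
zr.Close()
log.Printf("decompressed %d bytes", len(plain))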
Solution #2
Another solution is to use io.Pipe() together with a goroutine to create a stream that reads and compresses the data on demand, instead of buffering it all in memory. This is useful when the data you are reading is too large to hold in memory at once.
var cursor, _ = collection.Find(context.TODO(), filter)
defer cursor.Close(context.TODO())

// create pipe endpoints
reader, writer := io.Pipe()

// note: io.Pipe() returns a synchronous in-memory pipe;
// reads and writes block on one another,
// so make sure to go through the docs once.

// since reads and writes on a pipe block, we must move the
// writing to a background goroutine, otherwise all our writes
// would block forever
go func() {
    // order of defer here is important
    // see: https://stackoverflow.com/a/24720120/6611700
    // make sure the gzip stream is closed before the pipe
    // to ensure data is flushed properly
    defer writer.Close()
    var gz = gzip.NewWriter(writer)
    defer gz.Close()

    for cursor.Next(context.Background()) {
        if _, err := io.WriteString(gz, cursor.Current.String()); err != nil {
            // handle error somehow ˉ\_(ツ)_/ˉ
        }
    }
}()

// you can now use reader as an io.Reader
// and it'll produce the gzipped data for your serialized rows
_, err = s3.Upload(&s3.UploadInput{
    Bucket: aws.String("..."),
    Key:    aws.String("..."),
    Body:   reader,
})
