

Python concurrency: executor.map() and executor.submit()


墨色風雨 2023-06-13 10:57:29
I'm learning how to use concurrent.futures with executor.map() and executor.submit(). I have a list of 20 URLs and want to send the 20 requests concurrently. The problem is that .submit() returns results in a different order from the given list. I've read that map() can do what I need, but I don't know how to write the code with it. The code below works perfectly for me. Question: is there a map() equivalent of the code below, or any sorting method that orders the submit() results according to the given list?

import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the url and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
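To see the ordering difference the question describes without hitting the network, here is a minimal sketch; slow_task and the delays are made up for illustration. submit() plus as_completed() yields results in completion order, while map() yields them in input order:

```python
import concurrent.futures
import time

def slow_task(delay):
    """Sleep for `delay` seconds, then return it."""
    time.sleep(delay)
    return delay

delays = [0.3, 0.1, 0.2]

with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
    # submit() + as_completed(): results arrive in *completion* order
    futures = [executor.submit(slow_task, d) for d in delays]
    completed = [f.result() for f in concurrent.futures.as_completed(futures)]

    # map(): results come back in *input* order, regardless of finish time
    mapped = list(executor.map(slow_task, delays))

print(completed)  # fastest first: [0.1, 0.2, 0.3]
print(mapped)     # input order:   [0.3, 0.1, 0.2]
```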

2 Answers

一只萌萌小番薯


Here is the map() version of your existing code. Note that the callback now takes a tuple as its argument. I added a try/except inside the callback so a failed request doesn't raise when you read the results. The results are ordered according to the input list.


from concurrent.futures import ThreadPoolExecutor
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://www.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the url and contents
def load_url(tt):  # tt is a (url, timeout) tuple
    url, timeout = tt
    try:
        with urllib.request.urlopen(url, timeout=timeout) as conn:
            return (url, conn.read())
    except Exception as ex:
        print("Error:", url, ex)
        return (url, "")  # on error, return an empty body

with ThreadPoolExecutor(max_workers=5) as executor:
    # pass url and timeout to the callback as a single tuple
    results = executor.map(load_url, [(u, 60) for u in URLS])
    executor.shutdown(wait=True)  # wait for all to complete (the with block also does this on exit)
    print("Results:")

for r in results:  # ordered results; would raise here if not handled in the callback
    print('   %r page is %d bytes' % (r[0], len(r[1])))

Output


Error: http://www.wsj.com/ HTTP Error 404: Not Found
Results:
   'http://www.foxnews.com/' page is 320028 bytes
   'http://www.cnn.com/' page is 1144916 bytes
   'http://www.wsj.com/' page is 0 bytes
   'http://www.bbc.co.uk/' page is 279418 bytes
   'http://some-made-up-domain.com/' page is 64668 bytes
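As a variant of the tuple approach above: executor.map() also accepts multiple iterables, zipping them into the callback's positional arguments, so the original two-argument load_url(url, timeout) can stay unchanged. A network-free sketch, where the stand-in load_url body and the shortened URL list are for illustration only:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import repeat

def load_url(url, timeout):
    # stand-in for illustration; a real version would fetch the page
    return (url, "<fake body for %s>" % url)

URLS = ['http://www.foxnews.com/', 'http://www.cnn.com/']

with ThreadPoolExecutor(max_workers=5) as executor:
    # map() zips the iterables: calls load_url(url, 60) for each url, no tuple needed
    results = list(executor.map(load_url, URLS, repeat(60)))

for url, body in results:
    print('%r page is %d bytes' % (url, len(body)))
```

itertools.repeat(60) supplies the timeout for every URL; map() stops at the shortest iterable, so the infinite repeat is safe here.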


Answered 2023-06-13
幕布斯6054654


Without using map(), you can build the future_to_url dict with enumerate(), storing as each value not just the URL but also its index in the list. Then, from the future objects that concurrent.futures.as_completed(future_to_url) yields, build a second dict keyed by that index. That way you can iterate the indices over the length of the dict and read the results in the same order as the corresponding items in the original list:


# uses the same URLS list and load_url(url, timeout) from the question
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its index and URL
    future_to_url = {
        executor.submit(load_url, url, 60): (i, url) for i, url in enumerate(URLS)
    }
    futures = {}
    for future in concurrent.futures.as_completed(future_to_url):
        i, url = future_to_url[future]
        futures[i] = url, future
    for i in range(len(futures)):
        url, future = futures[i]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
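An even simpler variant of the same idea, not shown in the answer above: since submit() returns immediately, you can keep the futures in a plain list in submission order and call result() on each in turn. The downloads still run concurrently, and the output order matches the input list. A network-free sketch with a stand-in load_url:

```python
from concurrent.futures import ThreadPoolExecutor

def load_url(url, timeout):
    # stand-in for illustration; a real version would fetch and return the page
    return len(url)

URLS = ['http://www.foxnews.com/', 'http://www.cnn.com/', 'http://www.bbc.co.uk/']

with ThreadPoolExecutor(max_workers=5) as executor:
    # the list preserves submission order
    futures = [executor.submit(load_url, url, 60) for url in URLS]
    # result() blocks on each future in turn, so results come out in input order
    results = [f.result() for f in futures]

print(results)
```

The trade-off versus as_completed() is that a slow first URL delays reading the later results, even if they finished earlier.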


Answered 2023-06-13