
How to get a nested href in Python?

叮當貓咪 2023-10-04 14:22:00
Goal (I need to repeat this search hundreds of times):

1. Search "https://www.ncbi.nlm.nih.gov/ipg/" for a term (e.g. "WP_000177210.1"), i.e. https://www.ncbi.nlm.nih.gov/ipg/?term=WP_000177210.1
2. Select the first record in the second column of the table, "CDS Region in Nucleotide" (in this case "NC_011415.1 1997353-1998831 (-)"), i.e. https://www.ncbi.nlm.nih.gov/nuccore/NC_011415.1?from=1997353&to=1998831&strand=2
3. Select "FASTA" under that sequence name
4. Get the FASTA sequence (in this case "NC_011415.1:c1998831-1997353 Escherichia coli SE11, complete sequence ATGACTTTATGGATTAACGGTGACTGGATAACGGGCCAGGGCGCATCGCGTGTGAAGCGTAATCCGGTATCGGGCGAG...")

Code

1. Search "https://www.ncbi.nlm.nih.gov/ipg/" (e.g. for "WP_000177210.1"):

import requests
from bs4 import BeautifulSoup

url = "https://www.ncbi.nlm.nih.gov/ipg/"
r = requests.get(url, params = "WP_000177210.1")
if r.status_code == requests.codes.ok:
    soup = BeautifulSoup(r.text, "lxml")

2. Select the first record in the second column of the table, "CDS Region in Nucleotide" (in this case "NC_011415.1 1997353-1998831 (-)", i.e. https://www.ncbi.nlm.nih.gov/nuccore/NC_011415.1?from=1997353&to=1998831&strand=2):

# try 1 (wrong)
# I tried this first, but it seemed like it only reached the first level of hrefs?!
for a in soup.find_all('a', href=True):
    if a['href'][:8] == "/nuccore":
        print("Found the URL:", a['href'])

# try 2 (not sure how to access the nested href)
# Going by the tags I saw in the developer tools, I think I need the href inside the
# following nested structure. However, it didn't work.
soup.select("html div #maincontent div div div #ph-ipg div table tbody tr td a")

I'm stuck at this step...

P.S. This is my first time dealing with the HTML format, and also my first time asking a question here. I may not have stated the problem clearly; please let me know if anything is unclear.
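As an aside, the ?term= query string above suggests that step 1 should pass the search term through a params dict rather than a bare string; a minimal sketch:

import requests
from bs4 import BeautifulSoup

url = "https://www.ncbi.nlm.nih.gov/ipg/"
# 'term' mirrors the ?term=WP_000177210.1 query string shown above
r = requests.get(url, params={"term": "WP_000177210.1"})
if r.status_code == requests.codes.ok:
    soup = BeautifulSoup(r.text, "lxml")

Note that even with the correct query string, the results table on this page is filled in by JavaScript, so the plain-requests response may not contain the /nuccore links at all; that is what the answer below works around with Selenium.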

1 Answer

ibeautiful

Without using NCBI's REST API, you can scrape the fully rendered page with Selenium:


import time

from bs4 import BeautifulSoup
from selenium import webdriver

# Opens a Firefox browser for scraping purposes
browser = webdriver.Firefox(executable_path=r'your\path\geckodriver.exe')  # Put your own path here

# Loads the page completely (with all of the JS)
browser.get('https://www.ncbi.nlm.nih.gov/ipg/?term=WP_000177210.1')

# Delay turning the page into a soup so the JS-fetched data is present
time.sleep(3)

# Creates the soup (name an explicit parser)
soup = BeautifulSoup(browser.page_source, "html.parser")

# Keeps the links that point into /nuccore, dropping the bare '/nuccore' navigation link
links = [a['href'] for a in soup.find_all('a', href=True)
         if '/nuccore' in a['href'] and a['href'] != '/nuccore']

# Close the browser once the page source has been captured
browser.quit()
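As a design note, the fixed time.sleep(3) is simple but brittle: it wastes time when the table renders quickly and fails when it renders slowly. If it proves flaky, Selenium's explicit waits can block until the links actually appear; a sketch that could replace the sleep (the CSS selector is an assumption about the rendered table):

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Wait up to 10 seconds for at least one /nuccore link to be rendered
WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "a[href*='/nuccore']"))
)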

Notes:

You need the selenium package.

You need to have GeckoDriver installed.
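The scrape above stops at collecting the /nuccore hrefs (step 2 of the question). To finish step 4 and pull the FASTA text itself, one option is to parse the accession and coordinates out of a collected href and request the sequence from NCBI's E-utilities efetch endpoint. A sketch, assuming the hrefs keep the /nuccore/<accession>?from=...&to=...&strand=... shape shown in the question:

from urllib.parse import urlparse, parse_qs

import requests

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def fetch_fasta(href):
    # Assumes hrefs look like /nuccore/NC_011415.1?from=1997353&to=1998831&strand=2
    parsed = urlparse(href)
    accession = parsed.path.split("/")[-1]
    qs = parse_qs(parsed.query)
    params = {
        "db": "nuccore",
        "id": accession,
        "rettype": "fasta",
        "retmode": "text",
    }
    # Map the page's from/to/strand query parameters onto efetch's names
    if "from" in qs:
        params["seq_start"] = qs["from"][0]
    if "to" in qs:
        params["seq_stop"] = qs["to"][0]
    if "strand" in qs:
        params["strand"] = qs["strand"][0]
    r = requests.get(EFETCH, params=params)
    r.raise_for_status()
    return r.text

# Usage, given the links list collected above:
# print(fetch_fasta(links[0]))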


Answered 2023-10-04
