Weibo Data Collection, Weibo Crawler, Weibo Page Parsing: Complete Code (Post Content + Comments)


Updated December 3, 2023: fixed known issues.

For a journalism competition I needed to collect public opinion on a particular topic, so Weibo was chosen as one of the information sources.

Complete code

Weibo post content

import requests
import os
from bs4 import BeautifulSoup
import pandas as pd
import json


# Replace with your own cookies
cookies = {
    'SINAGLOBAL': '1278126679099.0298.1694199077980',
    'SCF': 'ApDYB6ZQHU_wHU8ItPHSso29Xu0ZRSkOOiFTBeXETNm7k7YlpnahLGVhB90-mk0xFNznyCVsjyu9-7-Hk0jRULM.',
    'SUB': '_2A25IaC_CDeRhGeFO61AY8i_NwzyIHXVrBC0KrDV8PUNbmtAGLVLckW9NQYCXlpjzhYwtC8sDM7giaMcMNIlWSlP6',
    'SUBP': '0033WrSXqPxfM725Ws9jqgMF55529P9D9W5mzQcPEhHvorRG-l7.BSsy5JpX5KzhUgL.FoM7ehz4eo2p1h52dJLoI0qLxK-LBKBLBKMLxKnL1--L1heLxKnL1-qLBo.LxK-L1KeL1KzLxK-L1KeL1KzLxK-L1KeL1Kzt',
    'ALF': '1733137172',
    '_s_tentry': 'weibo.com',
    'Apache': '435019984104.0236.1701606621998',
    'ULV': '1701606622040:13:2:2:435019984104.0236.1701606621998:1701601199048',
    }



def get_the_list_response(q='话题', n='1', p='页码'):
    headers = {
        'authority': 's.weibo.com',
        'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
        'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
        'referer': 'https://s.weibo.com/weibo?q=%23%E6%96%B0%E9%97%BB%E5%AD%A6%E6%95%99%E6%8E%88%E6%80%92%E6%80%BC%E5%BC%A0%E9%9B%AA%E5%B3%B0%23&nodup=1',
        'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"',
        'sec-ch-ua-mobile': '?0',
        'sec-ch-ua-platform': '"Windows"',
        'sec-fetch-dest': 'document',
        'sec-fetch-mode': 'navigate',
        'sec-fetch-site': 'same-origin',
        'sec-fetch-user': '?1',
        'upgrade-insecure-requests': '1',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.69',
    }
    
    params = {
        'q': q,
        'nodup': n,
        'page': p,
    }
    response = requests.get('https://s.weibo.com/weibo', params=params, cookies=cookies, headers=headers)
    return response

def parse_the_list(text):
    soup = BeautifulSoup(text, 'lxml')  # specify the parser explicitly
    divs = soup.select('div[action-type="feed_list_item"]')
    lst = []
    for div in divs:
        mid = div.get('mid')
        time = div.select('div.card-feed > div.content > div.from > a:first-of-type')
        if time:
            time = time[0].string.strip()
        else:
            time = None
        p = div.select('div.card-feed > div.content > p:last-of-type')
        if p:
            p = p[0].strings
            content = '\n'.join([para.replace('\u200b', '').strip() for para in list(p)]).strip()
        else:
            content = None
        star = div.select('ul > li > a > button > span.woo-like-count')
        if star:
            star = list(star[0].strings)[0]
        else:
            star = None
        lst.append((mid, content, star, time))
    df = pd.DataFrame(lst, columns=['mid', 'content', 'star', 'time'])
    return df

def get_the_list(q, p):
    df_list = []
    for i in range(1, p+1):
        response = get_the_list_response(q=q, p=i)
        if response.status_code == 200:
            df = parse_the_list(response.text)
            df_list.append(df)
            print(f'第{i}页解析成功!', flush=True)
            
    return df_list
    
if __name__ == '__main__':
    # Set the cookies above to your own first
    q = '#华为发布会#'
    p = 20
    df_list = get_the_list(q, p)
    df = pd.concat(df_list)
    df.to_csv(f'{q}.csv', index=False)

Weibo comment content

First-level comments

import requests
import os
from bs4 import BeautifulSoup
import pandas as pd
import json


# Replace with your own cookies
cookies = {
   'SINAGLOBAL': '1278126679099.0298.1694199077980',
   'SCF': 'ApDYB6ZQHU_wHU8ItPHSso29Xu0ZRSkOOiFTBeXETNm7k7YlpnahLGVhB90-mk0xFNznyCVsjyu9-7-Hk0jRULM.',
   'SUB': '_2A25IaC_CDeRhGeFO61AY8i_NwzyIHXVrBC0KrDV8PUNbmtAGLVLckW9NQYCXlpjzhYwtC8sDM7giaMcMNIlWSlP6',
   'SUBP': '0033WrSXqPxfM725Ws9jqgMF55529P9D9W5mzQcPEhHvorRG-l7.BSsy5JpX5KzhUgL.FoM7ehz4eo2p1h52dJLoI0qLxK-LBKBLBKMLxKnL1--L1heLxKnL1-qLBo.LxK-L1KeL1KzLxK-L1KeL1KzLxK-L1KeL1Kzt',
   'ALF': '1733137172',
   '_s_tentry': 'weibo.com',
   'Apache': '435019984104.0236.1701606621998',
   'ULV': '1701606622040:13:2:2:435019984104.0236.1701606621998:1701601199048',
   }

# Starting page number; no need to change
page_num = 0

def get_content_1(uid, mid, the_first=True, max_id=None):
   headers = {
      'authority': 'weibo.com',
      'accept': 'application/json, text/plain, */*',
      'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
      'client-version': 'v2.43.30',
      'referer': 'https://weibo.com/1762257041/NiSAxfmbZ',
      'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"',
      'sec-ch-ua-mobile': '?0',
      'sec-ch-ua-platform': '"Windows"',
      'sec-fetch-dest': 'empty',
      'sec-fetch-mode': 'cors',
      'sec-fetch-site': 'same-origin',
      'server-version': 'v2023.09.08.4',
      'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.69',
      'x-requested-with': 'XMLHttpRequest',
      'x-xsrf-token': 'F2EEQZrINBfzB2HPPxqTMQJ_',
   }
   
   params = {
      'is_reload': '1',
      'id': f'{mid}',
      'is_show_bulletin': '2',
      'is_mix': '0',
      'count': '20',
      'uid': f'{uid}',
      'fetch_level': '0',
      'locale': 'zh-CN',
   }
   
   if not the_first:
      params['flow'] = 0
      params['max_id'] = max_id
   else:
      pass
   response = requests.get('https://weibo.com/ajax/statuses/buildComments', params=params, cookies=cookies, headers=headers)
   return response


def get_content_2(get_content_1_url):
   headers = {
      'authority': 'weibo.com',
      'accept': '*/*',
      'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
      'content-type': 'multipart/form-data; boundary=----WebKitFormBoundaryNs1Toe4Mbr8n1qXm',
      'origin': 'https://weibo.com',
      'referer': 'https://weibo.com/1762257041/NiSAxfmbZ',
      'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"',
      'sec-ch-ua-mobile': '?0',
      'sec-ch-ua-platform': '"Windows"',
      'sec-fetch-dest': 'empty',
      'sec-fetch-mode': 'cors',
      'sec-fetch-site': 'same-origin',
      'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.69',
      'x-xsrf-token': 'F2EEQZrINBfzB2HPPxqTMQJ_',
   }
   
   s = '{"name":"https://weibo.com/ajax/statuses/buildComments?flow=0&is_reload=1&id=4944997453660231&is_show_bulletin=2&is_mix=0&max_id=139282732792325&count=20&uid=1762257041&fetch_level=0&locale=zh-CN","entryType":"resource","startTime":20639.80000001192,"duration":563,"initiatorType":"xmlhttprequest","nextHopProtocol":"h2","renderBlockingStatus":"non-blocking","workerStart":0,"redirectStart":0,"redirectEnd":0,"fetchStart":20639.80000001192,"domainLookupStart":20639.80000001192,"domainLookupEnd":20639.80000001192,"connectStart":20639.80000001192,"secureConnectionStart":20639.80000001192,"connectEnd":20639.80000001192,"requestStart":20641.600000023842,"responseStart":21198.600000023842,"firstInterimResponseStart":0,"responseEnd":21202.80000001192,"transferSize":7374,"encodedBodySize":7074,"decodedBodySize":42581,"responseStatus":200,"serverTiming":[],"dns":0,"tcp":0,"ttfb":557,"pathname":"https://weibo.com/ajax/statuses/buildComments","speed":0}'
   s = json.loads(s)
   s['name'] = get_content_1_url
   s = json.dumps(s)
   data = f'------WebKitFormBoundaryNs1Toe4Mbr8n1qXm\r\nContent-Disposition: form-data; name="entry"\r\n\r\n{s}\r\n------WebKitFormBoundaryNs1Toe4Mbr8n1qXm\r\nContent-Disposition: form-data; name="request_id"\r\n\r\n\r\n------WebKitFormBoundaryNs1Toe4Mbr8n1qXm--\r\n'
   response = requests.post('https://weibo.com/ajax/log/rum', cookies=cookies, headers=headers, data=data)
   return response.text

def get_once_data(uid, mid, the_first=True, max_id=None):

   respones_1 = get_content_1(uid, mid, the_first, max_id)
   url = respones_1.url
   response_2 = get_content_2(url)
   df = pd.DataFrame(respones_1.json()['data'])
   max_id = respones_1.json()['max_id']
   return max_id, df


if __name__ == '__main__':
   # Set the cookies at the top first
   # Only proceed once they are set
   
   # User settings
   name = '#邹振东诚邀张雪峰来厦门请你吃沙茶面#'
   uid = '2610806555'
   mid = '4914095331742409'
   page = 100
   
   # Initialization
   df_list = []
   max_id = ''
   
   for i in range(page):
      if i == 0:
          max_id, df = get_once_data(uid=uid, mid=mid)
      else:
          max_id, df = get_once_data(uid=uid, mid=mid, the_first=False, max_id=max_id)
      if df.shape[0] == 0 or max_id == 0:
          break
      else:
          df_list.append(df)
          print(f'第{i}页解析完毕!max_id:{max_id}')
   
   df = pd.concat(df_list).astype(str).drop_duplicates()
   df.to_csv(f'{name}.csv', index=False)

Second-level comments

import requests
import os
from bs4 import BeautifulSoup
import pandas as pd
import json

page_num = 0

cookies = {
  'SINAGLOBAL': '1278126679099.0298.1694199077980',
  'SUBP': '0033WrSXqPxfM725Ws9jqgMF55529P9D9W5mzQcPEhHvorRG-l7.BSsy5JpX5KMhUgL.FoM7ehz4eo2p1h52dJLoI0qLxK-LBKBLBKMLxKnL1--L1heLxKnL1-qLBo.LxK-L1KeL1KzLxK-L1KeL1KzLxK-L1KeL1Kzt',
  'XSRF-TOKEN': '47NC7wE7TMhcqfh1K-4bacK-',
  'ALF': '1697384140',
  'SSOLoginState': '1694792141',
  'SCF': 'ApDYB6ZQHU_wHU8ItPHSso29Xu0ZRSkOOiFTBeXETNm7IJXuI95RLbWORIsozuK4Ohxs_boeOIedEcczDT3uSAI.',
  'SUB': '_2A25IAAmdDeRhGeFO61AY8i_NwzyIHXVrdHxVrDV8PUNbmtAGLU74kW9NQYCXlmPtQ1DG4kl_wLzqQqkPl_Do1sZu',
  '_s_tentry': 'weibo.com',
  'Apache': '3760261250067.669.1694792155706',
  'ULV': '1694792155740:8:8:4:3760261250067.669.1694792155706:1694767801057',
  'WBPSESS': 'X5DJqu8gKpwqYSp80b4XokKvi4u4_oikBqVmvlBCHvGwXMxtKAFxIPg-LIF7foS715Sa4NttSYqzj5x2Ms5ynKVOM5I_Fsy9GECAYh38R4DQ-gq7M5XOe4y1gOUqvm1hOK60dUKvrA5hLuONCL2ing==',
}


def get_content_1(uid, mid, the_first=True, max_id=None):
   headers = {
   'authority': 'weibo.com',
   'accept': 'application/json, text/plain, */*',
   'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
   'client-version': 'v2.43.32',
   'referer': 'https://weibo.com/1887344341/NhAosFSL4',
   'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"',
   'sec-ch-ua-mobile': '?0',
   'sec-ch-ua-platform': '"Windows"',
   'sec-fetch-dest': 'empty',
   'sec-fetch-mode': 'cors',
   'sec-fetch-site': 'same-origin',
   'server-version': 'v2023.09.14.1',
   'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.69',
   'x-requested-with': 'XMLHttpRequest',
   'x-xsrf-token': '-UX-uyKz0jmzbTnlkyDEMvSO',
   }
   params = {
   'is_reload': '1',
   'id': f'{mid}',
   'is_show_bulletin': '2',
   'is_mix': '1',
   'fetch_level': '1',
   'max_id': '0',
   'count': '20',
   'uid': f'{uid}',
   'locale': 'zh-CN',
   }
   
   if not the_first:
     params['flow'] = 0
     params['max_id'] = max_id
   else:
     pass
   response = requests.get('https://weibo.com/ajax/statuses/buildComments', params=params, cookies=cookies, headers=headers)
   return response


def get_content_2(get_content_1_url):
   headers = {
     'authority': 'weibo.com',
     'accept': '*/*',
     'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
     'content-type': 'multipart/form-data; boundary=----WebKitFormBoundaryNs1Toe4Mbr8n1qXm',
     'origin': 'https://weibo.com',
     'referer': 'https://weibo.com/1762257041/NiSAxfmbZ',
     'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"',
     'sec-ch-ua-mobile': '?0',
     'sec-ch-ua-platform': '"Windows"',
     'sec-fetch-dest': 'empty',
     'sec-fetch-mode': 'cors',
     'sec-fetch-site': 'same-origin',
     'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.69',
     'x-xsrf-token': 'F2EEQZrINBfzB2HPPxqTMQJ_',
   }
   
   s = '{"name":"https://weibo.com/ajax/statuses/buildComments?flow=0&is_reload=1&id=4944997453660231&is_show_bulletin=2&is_mix=0&max_id=139282732792325&count=20&uid=1762257041&fetch_level=0&locale=zh-CN","entryType":"resource","startTime":20639.80000001192,"duration":563,"initiatorType":"xmlhttprequest","nextHopProtocol":"h2","renderBlockingStatus":"non-blocking","workerStart":0,"redirectStart":0,"redirectEnd":0,"fetchStart":20639.80000001192,"domainLookupStart":20639.80000001192,"domainLookupEnd":20639.80000001192,"connectStart":20639.80000001192,"secureConnectionStart":20639.80000001192,"connectEnd":20639.80000001192,"requestStart":20641.600000023842,"responseStart":21198.600000023842,"firstInterimResponseStart":0,"responseEnd":21202.80000001192,"transferSize":7374,"encodedBodySize":7074,"decodedBodySize":42581,"responseStatus":200,"serverTiming":[],"dns":0,"tcp":0,"ttfb":557,"pathname":"https://weibo.com/ajax/statuses/buildComments","speed":0}'
   s = json.loads(s)
   s['name'] = get_content_1_url
   s = json.dumps(s)
   data = f'------WebKitFormBoundaryNs1Toe4Mbr8n1qXm\r\nContent-Disposition: form-data; name="entry"\r\n\r\n{s}\r\n------WebKitFormBoundaryNs1Toe4Mbr8n1qXm\r\nContent-Disposition: form-data; name="request_id"\r\n\r\n\r\n------WebKitFormBoundaryNs1Toe4Mbr8n1qXm--\r\n'
   response = requests.post('https://weibo.com/ajax/log/rum', cookies=cookies, headers=headers, data=data)
   return response.text

def get_once_data(uid, mid, the_first=True, max_id=None):
   
   respones_1 = get_content_1(uid, mid, the_first, max_id)
   url = respones_1.url
   response_2 = get_content_2(url)
   df = pd.DataFrame(respones_1.json()['data'])
   max_id = respones_1.json()['max_id']
   return max_id, df

if __name__ == '__main__':
   # Update the cookies first
   
   # First-level comment data collected earlier
   df = pd.read_csv('#邹振东诚邀张雪峰来厦门请你吃沙茶面#.csv')
   
   
   # Filter out first-level comments that have no second-level comments
   df = df[df['floor_number']>0]
   
   os.makedirs('./二级评论数据/', exist_ok=True)
   for i in range(df.shape[0]):
   
      uid = df.iloc[i]['analysis_extra'].replace('|mid:',':').split(':')[1]
      mid = df.iloc[i]['mid']
      page = 100
      
      if not os.path.exists(f'./二级评论数据/{mid}-{uid}.csv'):
          print(f'不存在 ./二级评论数据/{mid}-{uid}.csv')
          df_list = []
          max_id_set = set()
          max_id = ''
   
          
          for j in range(page):
              if max_id in max_id_set:
                  break
              else:
                  max_id_set.add(max_id)
              if j == 0:
                  max_id, df_ = get_once_data(uid=uid, mid=mid)
              else:
                  max_id, df_ = get_once_data(uid=uid, mid=mid, the_first=False, max_id=max_id)
              if df_.shape[0] == 0 or max_id == 0:
                  break
              else:
                  df_list.append(df_)
                  print(f'{mid}{j}页解析完毕!max_id:{max_id}')
          if df_list:
              outdf = pd.concat(df_list).astype(str).drop_duplicates()
              print(f'文件长度为{outdf.shape[0]},文件保存为 ./二级评论数据/{mid}-{uid}.csv')
              outdf.to_csv(f'./二级评论数据/{mid}-{uid}.csv', index=False)
          else:
              pass
      else:
          print(f'存在 ./二级评论数据/{mid}-{uid}.csv')

Workflow for collecting Weibo post content



Using the trending topic #华为发布会# as an example, open DevTools and you will find that the information we need is basically all contained in div elements with action-type="feed_list_item".




Switching to the Network tab, the information turns out to be carried by the request whose name starts with %23 (the URL-encoded "#" of the topic). Next, let's look at its response.




The response is HTML, so it can be parsed with XPath or CSS selectors; here we use BeautifulSoup. The parsing code is as follows:

soup = BeautifulSoup(response.text, 'lxml')
divs = soup.select('div[action-type="feed_list_item"]')
lst = []
for div in divs:
    mid = div.get('mid')
    uid = div.select('div.card-feed > div.avator > a')
    if uid:
        uid = uid[0].get('href').replace('.com/', '?').split('?')[1]
    else:
        uid = None
    time = div.select('div.card-feed > div.content > div.from > a:first-of-type')
    if time:
        time = time[0].string.strip()
    else:
        time = None
    p = div.select('div.card-feed > div.content > p:last-of-type')
    if p:
        p = p[0].strings
        content = '\n'.join([para.replace('\u200b', '').strip() for para in list(p)]).strip()
    else:
        content = None
    star = div.select('ul > li > a > button > span.woo-like-count')
    if star:
        star = list(star[0].strings)[0]
    else:
        star = None
    lst.append((mid, uid, content, star, time))
pd.DataFrame(lst, columns=['mid', 'uid', 'content', 'star', 'time'])

Running this gives a DataFrame with mid, uid, content, star, and time columns.


The mid and uid columns are kept because they are needed in the next section to fetch comments; if you don't need them, simply drop them. Next, let's look at the request itself. Before we start, to make the request easier to analyze, click 查看全部搜索结果 (view all search results).

A new request whose name starts with weibo appears. Its content is similar to the %23 request, but it carries the parameters q and nodup; after turning the page we also get a page parameter.
My reading of these parameters:

1. q: the search topic
2. nodup: whether to show the complete result set
3. page: the page number

We can then reproduce this request in Python; combined with the parsing above, the content is fetched successfully.

The complete code is as follows:

import requests
import os
from bs4 import BeautifulSoup
import pandas as pd
import json


# Replace with your own cookies
cookies = {
    'SINAGLOBAL': '1278126679099.0298.1694199077980',
    'SCF': 'ApDYB6ZQHU_wHU8ItPHSso29Xu0ZRSkOOiFTBeXETNm7k7YlpnahLGVhB90-mk0xFNznyCVsjyu9-7-Hk0jRULM.',
    'SUB': '_2A25IaC_CDeRhGeFO61AY8i_NwzyIHXVrBC0KrDV8PUNbmtAGLVLckW9NQYCXlpjzhYwtC8sDM7giaMcMNIlWSlP6',
    'SUBP': '0033WrSXqPxfM725Ws9jqgMF55529P9D9W5mzQcPEhHvorRG-l7.BSsy5JpX5KzhUgL.FoM7ehz4eo2p1h52dJLoI0qLxK-LBKBLBKMLxKnL1--L1heLxKnL1-qLBo.LxK-L1KeL1KzLxK-L1KeL1KzLxK-L1KeL1Kzt',
    'ALF': '1733137172',
    '_s_tentry': 'weibo.com',
    'Apache': '435019984104.0236.1701606621998',
    'ULV': '1701606622040:13:2:2:435019984104.0236.1701606621998:1701601199048',
    }



def get_the_list_response(q='话题', n='1', p='页码'):
    headers = {
        'authority': 's.weibo.com',
        'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
        'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
        'referer': 'https://s.weibo.com/weibo?q=%23%E6%96%B0%E9%97%BB%E5%AD%A6%E6%95%99%E6%8E%88%E6%80%92%E6%80%BC%E5%BC%A0%E9%9B%AA%E5%B3%B0%23&nodup=1',
        'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"',
        'sec-ch-ua-mobile': '?0',
        'sec-ch-ua-platform': '"Windows"',
        'sec-fetch-dest': 'document',
        'sec-fetch-mode': 'navigate',
        'sec-fetch-site': 'same-origin',
        'sec-fetch-user': '?1',
        'upgrade-insecure-requests': '1',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.69',
    }
    
    params = {
        'q': q,
        'nodup': n,
        'page': p,
    }
    response = requests.get('https://s.weibo.com/weibo', params=params, cookies=cookies, headers=headers)
    return response

def parse_the_list(text):
    soup = BeautifulSoup(text, 'lxml')  # specify the parser explicitly
    divs = soup.select('div[action-type="feed_list_item"]')
    lst = []
    for div in divs:
        mid = div.get('mid')
        time = div.select('div.card-feed > div.content > div.from > a:first-of-type')
        if time:
            time = time[0].string.strip()
        else:
            time = None
        p = div.select('div.card-feed > div.content > p:last-of-type')
        if p:
            p = p[0].strings
            content = '\n'.join([para.replace('\u200b', '').strip() for para in list(p)]).strip()
        else:
            content = None
        star = div.select('ul > li > a > button > span.woo-like-count')
        if star:
            star = list(star[0].strings)[0]
        else:
            star = None
        lst.append((mid, content, star, time))
    df = pd.DataFrame(lst, columns=['mid', 'content', 'star', 'time'])
    return df

def get_the_list(q, p):
    df_list = []
    for i in range(1, p+1):
        response = get_the_list_response(q=q, p=i)
        if response.status_code == 200:
            df = parse_the_list(response.text)
            df_list.append(df)
            print(f'第{i}页解析成功!', flush=True)
            
    return df_list
    
if __name__ == '__main__':
    # Set the cookies above to your own first
    q = '#华为发布会#'
    p = 20
    df_list = get_the_list(q, p)
    df = pd.concat(df_list)
    df.to_csv(f'{q}.csv', index=False)


Workflow for collecting Weibo comments

First-level comments

The previous section fetched the main post content, and there was nothing particularly difficult about it. I thought I was done, but the team lead also wanted the comments, so I had no choice but to keep going. Fetching the comment content is a little more roundabout.

First, open a post with a relatively large number of comments, then click 后面还有552条评论,点击查看 (552 more comments, click to view).




The <div class="vue-recycle-scroller__item-wrapper"> element is where the content we want lives.




As in the previous section, look through the requests: the one named buildComments?is_reload=1&id=... contains the information we want, and the preview shows it is JSON, which saves us the HTML parsing step. All that's left is to figure out the request parameters.




Without further ado, scroll down to trigger a few more requests and compare them. My analysis of the captured requests is as follows:




Each time you scroll down, two requests appear: one is buildComments?flow=0&is_reload=1&id=49451497063731…, the other is rum. Meanwhile the parameters of the buildComments request change from call to call; the very first request carries neither flow nor max_id. After an afternoon of analysis I arrived at the following (a condensed sketch of the paging logic follows the list):

1. flow: marks a follow-up request; it must not be sent on the first request
2. id: the id of the main post, i.e. the mid obtained in the previous section
3. count: the number of comments returned per request
4. uid: the user id of the post author, i.e. the uid obtained in the previous section
5. max_id: the mid of the last comment in the previous response; it must not be sent on the first request
6. the other parameters stay unchanged
7. rum is sent right after buildComments to check whether the request was issued by a human; it is an anti-crawling measure
8. the rum payload is built around the buildComments URL
9. the rum payload here was pieced together by trial and error; some of its fields have no effect on the result, it just has to work
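
Before the full code, here is a minimal sketch of just this paging loop, condensed from the complete code below. The rum logging call, cookies, and full request headers are omitted here; reuse the dicts from the complete code when actually running it.

import requests
import pandas as pd

def fetch_comments(uid, mid, max_pages=100, cookies=None, headers=None):
    # Minimal paging sketch: the first request carries neither flow nor max_id,
    # later requests pass the max_id returned by the previous response,
    # and the loop stops when max_id == 0 or no data comes back.
    frames = []
    max_id = None
    for page in range(max_pages):
        params = {
            'is_reload': '1', 'id': f'{mid}', 'is_show_bulletin': '2',
            'is_mix': '0', 'count': '20', 'uid': f'{uid}',
            'fetch_level': '0', 'locale': 'zh-CN',
        }
        if page > 0:
            params['flow'] = 0
            params['max_id'] = max_id
        resp = requests.get('https://weibo.com/ajax/statuses/buildComments',
                            params=params, cookies=cookies, headers=headers)
        payload = resp.json()
        if not payload['data'] or payload['max_id'] == 0:
            break
        frames.append(pd.DataFrame(payload['data']))
        max_id = payload['max_id']  # cursor for the next page
    return pd.concat(frames).astype(str).drop_duplicates() if frames else pd.DataFrame()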

The complete code is as follows:

import requests
import os
from bs4 import BeautifulSoup
import pandas as pd
import json


# Replace with your own cookies
cookies = {
   'SINAGLOBAL': '1278126679099.0298.1694199077980',
   'SCF': 'ApDYB6ZQHU_wHU8ItPHSso29Xu0ZRSkOOiFTBeXETNm7k7YlpnahLGVhB90-mk0xFNznyCVsjyu9-7-Hk0jRULM.',
   'SUB': '_2A25IaC_CDeRhGeFO61AY8i_NwzyIHXVrBC0KrDV8PUNbmtAGLVLckW9NQYCXlpjzhYwtC8sDM7giaMcMNIlWSlP6',
   'SUBP': '0033WrSXqPxfM725Ws9jqgMF55529P9D9W5mzQcPEhHvorRG-l7.BSsy5JpX5KzhUgL.FoM7ehz4eo2p1h52dJLoI0qLxK-LBKBLBKMLxKnL1--L1heLxKnL1-qLBo.LxK-L1KeL1KzLxK-L1KeL1KzLxK-L1KeL1Kzt',
   'ALF': '1733137172',
   '_s_tentry': 'weibo.com',
   'Apache': '435019984104.0236.1701606621998',
   'ULV': '1701606622040:13:2:2:435019984104.0236.1701606621998:1701601199048',
   }

# Starting page number; no need to change
page_num = 0

def get_content_1(uid, mid, the_first=True, max_id=None):
   headers = {
      'authority': 'weibo.com',
      'accept': 'application/json, text/plain, */*',
      'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
      'client-version': 'v2.43.30',
      'referer': 'https://weibo.com/1762257041/NiSAxfmbZ',
      'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"',
      'sec-ch-ua-mobile': '?0',
      'sec-ch-ua-platform': '"Windows"',
      'sec-fetch-dest': 'empty',
      'sec-fetch-mode': 'cors',
      'sec-fetch-site': 'same-origin',
      'server-version': 'v2023.09.08.4',
      'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.69',
      'x-requested-with': 'XMLHttpRequest',
      'x-xsrf-token': 'F2EEQZrINBfzB2HPPxqTMQJ_',
   }
   
   params = {
      'is_reload': '1',
      'id': f'{mid}',
      'is_show_bulletin': '2',
      'is_mix': '0',
      'count': '20',
      'uid': f'{uid}',
      'fetch_level': '0',
      'locale': 'zh-CN',
   }
   
   if not the_first:
      params['flow'] = 0
      params['max_id'] = max_id
   else:
      pass
   response = requests.get('https://weibo.com/ajax/statuses/buildComments', params=params, cookies=cookies, headers=headers)
   return response


def get_content_2(get_content_1_url):
   headers = {
      'authority': 'weibo.com',
      'accept': '*/*',
      'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
      'content-type': 'multipart/form-data; boundary=----WebKitFormBoundaryNs1Toe4Mbr8n1qXm',
      'origin': 'https://weibo.com',
      'referer': 'https://weibo.com/1762257041/NiSAxfmbZ',
      'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"',
      'sec-ch-ua-mobile': '?0',
      'sec-ch-ua-platform': '"Windows"',
      'sec-fetch-dest': 'empty',
      'sec-fetch-mode': 'cors',
      'sec-fetch-site': 'same-origin',
      'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.69',
      'x-xsrf-token': 'F2EEQZrINBfzB2HPPxqTMQJ_',
   }
   
   s = '{"name":"https://weibo.com/ajax/statuses/buildComments?flow=0&is_reload=1&id=4944997453660231&is_show_bulletin=2&is_mix=0&max_id=139282732792325&count=20&uid=1762257041&fetch_level=0&locale=zh-CN","entryType":"resource","startTime":20639.80000001192,"duration":563,"initiatorType":"xmlhttprequest","nextHopProtocol":"h2","renderBlockingStatus":"non-blocking","workerStart":0,"redirectStart":0,"redirectEnd":0,"fetchStart":20639.80000001192,"domainLookupStart":20639.80000001192,"domainLookupEnd":20639.80000001192,"connectStart":20639.80000001192,"secureConnectionStart":20639.80000001192,"connectEnd":20639.80000001192,"requestStart":20641.600000023842,"responseStart":21198.600000023842,"firstInterimResponseStart":0,"responseEnd":21202.80000001192,"transferSize":7374,"encodedBodySize":7074,"decodedBodySize":42581,"responseStatus":200,"serverTiming":[],"dns":0,"tcp":0,"ttfb":557,"pathname":"https://weibo.com/ajax/statuses/buildComments","speed":0}'
   s = json.loads(s)
   s['name'] = get_content_1_url
   s = json.dumps(s)
   data = f'------WebKitFormBoundaryNs1Toe4Mbr8n1qXm\r\nContent-Disposition: form-data; name="entry"\r\n\r\n{s}\r\n------WebKitFormBoundaryNs1Toe4Mbr8n1qXm\r\nContent-Disposition: form-data; name="request_id"\r\n\r\n\r\n------WebKitFormBoundaryNs1Toe4Mbr8n1qXm--\r\n'
   response = requests.post('https://weibo.com/ajax/log/rum', cookies=cookies, headers=headers, data=data)
   return response.text

def get_once_data(uid, mid, the_first=True, max_id=None):

   respones_1 = get_content_1(uid, mid, the_first, max_id)
   url = respones_1.url
   response_2 = get_content_2(url)
   df = pd.DataFrame(respones_1.json()['data'])
   max_id = respones_1.json()['max_id']
   return max_id, df


if __name__ == '__main__':
   # Set the cookies at the top first
   # Only proceed once they are set
   
   # User settings
   name = '#邹振东诚邀张雪峰来厦门请你吃沙茶面#'
   uid = '2610806555'
   mid = '4914095331742409'
   page = 100
   
   # Initialization
   df_list = []
   max_id = ''
   
   for i in range(page):
      if i == 0:
          max_id, df = get_once_data(uid=uid, mid=mid)
      else:
          max_id, df = get_once_data(uid=uid, mid=mid, the_first=False, max_id=max_id)
      if df.shape[0] == 0 or max_id == 0:
          break
      else:
          df_list.append(df)
          print(f'第{i}页解析完毕!max_id:{max_id}')
   
   df = pd.concat(df_list).astype(str).drop_duplicates()
   df.to_csv(f'{name}.csv', index=False)

Done!

Second-level comments

The workflow for second-level comments is the same as for first-level comments; only the parameters differ.
Parameters for first-level comments:

params = {
    'is_reload': '1',
    'id': f'{mid}',
    'is_show_bulletin': '2',
    'is_mix': '0',
    'count': '20',
    'uid': f'{uid}',
    'fetch_level': '0',
    'locale': 'zh-CN',
}

Parameters for second-level comments:

params = {
    'is_reload': '1',
    'id': f'{mid}',
    'is_show_bulletin': '2',
    'is_mix': '1',
    'fetch_level': '1',
    'max_id': '0',
    'count': '20',
    'uid': f'{uid}',
    'locale': 'zh-CN',
}

In the second-level comment parameters, uid still refers to the uid of the author of the main post, while mid refers to the mid of the first-level comment whose replies are being fetched (taken from the mid column of the first-level comment data).
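
For reference, here is a small example of how these two values are pulled from the first-level comment CSV. The layout of the analysis_extra column is inferred from the parsing line used in the complete code below, and the values shown are made up for illustration:

import pandas as pd

# Made-up row from the first-level comment CSV; the inferred layout of
# analysis_extra is "author_uid:<post author uid>|mid:<comment mid>".
row = pd.Series({
    'analysis_extra': 'author_uid:1234567890|mid:4950000000000001',
    'mid': 4950000000000001,
})

uid = row['analysis_extra'].replace('|mid:', ':').split(':')[1]  # post author's uid -> '1234567890'
mid = row['mid']                                                 # first-level comment's mid
print(uid, mid)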

The complete code is as follows:

import requests
import os
from bs4 import BeautifulSoup
import pandas as pd
import json

page_num = 0

cookies = {
   'SINAGLOBAL': '1278126679099.0298.1694199077980',
   'SUBP': '0033WrSXqPxfM725Ws9jqgMF55529P9D9W5mzQcPEhHvorRG-l7.BSsy5JpX5KMhUgL.FoM7ehz4eo2p1h52dJLoI0qLxK-LBKBLBKMLxKnL1--L1heLxKnL1-qLBo.LxK-L1KeL1KzLxK-L1KeL1KzLxK-L1KeL1Kzt',
   'XSRF-TOKEN': '47NC7wE7TMhcqfh1K-4bacK-',
   'ALF': '1697384140',
   'SSOLoginState': '1694792141',
   'SCF': 'ApDYB6ZQHU_wHU8ItPHSso29Xu0ZRSkOOiFTBeXETNm7IJXuI95RLbWORIsozuK4Ohxs_boeOIedEcczDT3uSAI.',
   'SUB': '_2A25IAAmdDeRhGeFO61AY8i_NwzyIHXVrdHxVrDV8PUNbmtAGLU74kW9NQYCXlmPtQ1DG4kl_wLzqQqkPl_Do1sZu',
   '_s_tentry': 'weibo.com',
   'Apache': '3760261250067.669.1694792155706',
   'ULV': '1694792155740:8:8:4:3760261250067.669.1694792155706:1694767801057',
   'WBPSESS': 'X5DJqu8gKpwqYSp80b4XokKvi4u4_oikBqVmvlBCHvGwXMxtKAFxIPg-LIF7foS715Sa4NttSYqzj5x2Ms5ynKVOM5I_Fsy9GECAYh38R4DQ-gq7M5XOe4y1gOUqvm1hOK60dUKvrA5hLuONCL2ing==',
}


def get_content_1(uid, mid, the_first=True, max_id=None):
    headers = {
    'authority': 'weibo.com',
    'accept': 'application/json, text/plain, */*',
    'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
    'client-version': 'v2.43.32',
    'referer': 'https://weibo.com/1887344341/NhAosFSL4',
    'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
    'sec-fetch-dest': 'empty',
    'sec-fetch-mode': 'cors',
    'sec-fetch-site': 'same-origin',
    'server-version': 'v2023.09.14.1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.69',
    'x-requested-with': 'XMLHttpRequest',
    'x-xsrf-token': '-UX-uyKz0jmzbTnlkyDEMvSO',
    }
    params = {
    'is_reload': '1',
    'id': f'{mid}',
    'is_show_bulletin': '2',
    'is_mix': '1',
    'fetch_level': '1',
    'max_id': '0',
    'count': '20',
    'uid': f'{uid}',
    'locale': 'zh-CN',
    }
    
    if not the_first:
      params['flow'] = 0
      params['max_id'] = max_id
    else:
      pass
    response = requests.get('https://weibo.com/ajax/statuses/buildComments', params=params, cookies=cookies, headers=headers)
    return response


def get_content_2(get_content_1_url):
    headers = {
      'authority': 'weibo.com',
      'accept': '*/*',
      'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
      'content-type': 'multipart/form-data; boundary=----WebKitFormBoundaryNs1Toe4Mbr8n1qXm',
      'origin': 'https://weibo.com',
      'referer': 'https://weibo.com/1762257041/NiSAxfmbZ',
      'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"',
      'sec-ch-ua-mobile': '?0',
      'sec-ch-ua-platform': '"Windows"',
      'sec-fetch-dest': 'empty',
      'sec-fetch-mode': 'cors',
      'sec-fetch-site': 'same-origin',
      'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.69',
      'x-xsrf-token': 'F2EEQZrINBfzB2HPPxqTMQJ_',
    }
    
    s = '{"name":"https://weibo.com/ajax/statuses/buildComments?flow=0&is_reload=1&id=4944997453660231&is_show_bulletin=2&is_mix=0&max_id=139282732792325&count=20&uid=1762257041&fetch_level=0&locale=zh-CN","entryType":"resource","startTime":20639.80000001192,"duration":563,"initiatorType":"xmlhttprequest","nextHopProtocol":"h2","renderBlockingStatus":"non-blocking","workerStart":0,"redirectStart":0,"redirectEnd":0,"fetchStart":20639.80000001192,"domainLookupStart":20639.80000001192,"domainLookupEnd":20639.80000001192,"connectStart":20639.80000001192,"secureConnectionStart":20639.80000001192,"connectEnd":20639.80000001192,"requestStart":20641.600000023842,"responseStart":21198.600000023842,"firstInterimResponseStart":0,"responseEnd":21202.80000001192,"transferSize":7374,"encodedBodySize":7074,"decodedBodySize":42581,"responseStatus":200,"serverTiming":[],"dns":0,"tcp":0,"ttfb":557,"pathname":"https://weibo.com/ajax/statuses/buildComments","speed":0}'
    s = json.loads(s)
    s['name'] = get_content_1_url
    s = json.dumps(s)
    data = f'------WebKitFormBoundaryNs1Toe4Mbr8n1qXm\r\nContent-Disposition: form-data; name="entry"\r\n\r\n{s}\r\n------WebKitFormBoundaryNs1Toe4Mbr8n1qXm\r\nContent-Disposition: form-data; name="request_id"\r\n\r\n\r\n------WebKitFormBoundaryNs1Toe4Mbr8n1qXm--\r\n'
    response = requests.post('https://weibo.com/ajax/log/rum', cookies=cookies, headers=headers, data=data)
    return response.text

def get_once_data(uid, mid, the_first=True, max_id=None):
    
    respones_1 = get_content_1(uid, mid, the_first, max_id)
    url = respones_1.url
    response_2 = get_content_2(url)
    df = pd.DataFrame(respones_1.json()['data'])
    max_id = respones_1.json()['max_id']
    return max_id, df

if __name__ == '__main__':
    # Update the cookies first
    
    # First-level comment data collected earlier
    df = pd.read_csv('#邹振东诚邀张雪峰来厦门请你吃沙茶面#.csv')
    
    
    # Filter out first-level comments that have no second-level comments
    df = df[df['floor_number']>0]
    
    os.makedirs('./二级评论数据/', exist_ok=True)
    for i in range(df.shape[0]):
    
       uid = df.iloc[i]['analysis_extra'].replace('|mid:',':').split(':')[1]
       mid = df.iloc[i]['mid']
       page = 100
       
       if not os.path.exists(f'./二级评论数据/{mid}-{uid}.csv'):
           print(f'不存在 ./二级评论数据/{mid}-{uid}.csv')
           df_list = []
           max_id_set = set()
           max_id = ''
    
           
           for j in range(page):
               if max_id in max_id_set:
                   break
               else:
                   max_id_set.add(max_id)
               if j == 0:
                   max_id, df_ = get_once_data(uid=uid, mid=mid)
               else:
                   max_id, df_ = get_once_data(uid=uid, mid=mid, the_first=False, max_id=max_id)
               if df_.shape[0] == 0 or max_id == 0:
                   break
               else:
                   df_list.append(df_)
                   print(f'{mid}{j}页解析完毕!max_id:{max_id}')
           if df_list:
               outdf = pd.concat(df_list).astype(str).drop_duplicates()
               print(f'文件长度为{outdf.shape[0]},文件保存为 ./二级评论数据/{mid}-{uid}.csv')
               outdf.to_csv(f'./二级评论数据/{mid}-{uid}.csv', index=False)
           else:
               pass
       else:
           print(f'存在 ./二级评论数据/{mid}-{uid}.csv')

Results of running the code
Done!

Troubleshooting

Garbled text in the CSV file

Change df.to_csv(...) to df.to_csv(..., encoding='utf_8_sig'); the utf_8_sig encoding writes a UTF-8 BOM so that Excel opens the file without garbling the Chinese text.
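
A minimal illustration (the DataFrame here is made up):

import pandas as pd

df = pd.DataFrame({'content': ['示例评论']})  # made-up data for illustration
# utf_8_sig prepends a UTF-8 BOM, so Excel recognizes the encoding and the
# Chinese text is displayed correctly when the CSV is opened directly.
df.to_csv('comments.csv', index=False, encoding='utf_8_sig')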
