Date: 2021-05-22
This article shares a multi-threaded web page downloader implemented in Python, for your reference. The details are as follows.
This implementation came out of a real need: I use it to submit game data to a server over HTTP. I am posting it here so that everyone can help pick it apart, find bugs, and make it work better.
Keywords: python, http, multi-threads, thread, threading, httplib, urllib, urllib2, Queue, http pool, httppool
Enough talk, here is the source code:
# -*- coding:utf-8 -*-
import urllib, httplib
import thread
import time
from Queue import Queue, Empty, Full

# Default request headers: form-encoded request body, plain-text response expected.
HEADERS = {"Content-type": "application/x-www-form-urlencoded",
           "User-Agent": "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)",
           "Accept": "text/plain"}
UNEXPECTED_ERROR = -1
POST = 'POST'
GET = 'GET'

def base_log(msg):
    print msg

def base_fail_op(task, status, log):
    log('fail op. task = %s, status = %d' % (str(task), status))

def get_remote_data(tasks, results, fail_op = base_fail_op, log = base_log):
    # Worker loop: pull a task from the queue, issue the HTTP request,
    # and push (task id, response body) onto the result queue.
    while True:
        task = tasks.get()
        try:
            tid = task['id']
            hpt = task['conn_args']    # hpt <= host:port, timeout
        except KeyError, e:
            log(str(e))
            continue
        log('thread_%s doing task %d' % (thread.get_ident(), tid))
        conn = httplib.HTTPConnection(**hpt)
        try:
            params = task['params']
        except KeyError:
            params = {}
        params = urllib.urlencode(params)
        try:
            method = task['method']
        except KeyError:
            method = 'GET'
        try:
            url = task['url']
        except KeyError:
            url = '/'
        headers = dict(HEADERS)        # copy, so per-task headers do not pollute the shared default
        try:
            tmp = task['headers']
        except KeyError:
            tmp = {}
        headers.update(tmp)
        headers['Content-Length'] = len(params)
        try:
            if method == POST:
                conn.request(method, url, params, headers)
            else:
                conn.request(method, url + params)
            response = conn.getresponse()
        except Exception, e:
            log('request failed. method = %s, url = %s, params = %s, headers = %s' % (
                method, url, params, headers))
            log(str(e))
            fail_op(task, UNEXPECTED_ERROR, log)
            continue
        if response.status != httplib.OK:
            fail_op(task, response.status, log)
            continue
        data = response.read()
        results.put((tid, data), True)

class HttpPool(object):
    def __init__(self, threads_count, fail_op, log):
        self._tasks = Queue()
        self._results = Queue()
        for i in xrange(threads_count):
            thread.start_new_thread(get_remote_data,
                                    (self._tasks, self._results, fail_op, log))

    def add_task(self, tid, host, url, params, headers = {}, method = 'GET', timeout = None):
        task = {
            'id'        : tid,
            'conn_args' : {'host': host} if timeout is None else {'host': host, 'timeout': timeout},
            'headers'   : headers,
            'url'       : url,
            'params'    : params,
            'method'    : method,
        }
        try:
            self._tasks.put_nowait(task)
        except Full:
            return False
        return True

    def get_results(self):
        # Non-blocking drain of whatever results are ready so far.
        results = []
        while True:
            try:
                res = self._results.get_nowait()
            except Empty:
                break
            results.append(res)
        return results

def test_google(task_count, threads_count):
    hp = HttpPool(threads_count, base_fail_op, base_log)
    for i in xrange(task_count):
        if hp.add_task(i,
                       'www.google.cn',
                       '/search?',
                       {'q': 'lai'},
                       # method = 'POST'
                       ):
            print 'add task succeeded.'
    # Poll forever; stop with Ctrl-C once every task id has been printed.
    while True:
        results = hp.get_results()
        if not results:
            time.sleep(1.0 * random.random())
        for i in results:
            print i[0], len(i[1])
            # print unicode(i[1], 'gb18030')

if __name__ == '__main__':
    import sys, random
    task_count, threads_count = int(sys.argv[1]), int(sys.argv[2])
    test_google(task_count, threads_count)

If you would like to try it out, save it as xxxx.py and run python xxxx.py 10 4, where 10 means 10 search requests are sent to google.cn and 4 means those tasks are handled by 4 threads.
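Since my own use case is posting game data to a server, here is a rough sketch of how the pool could be driven for that. This is not part of the original code: the module name httppool, the host example.com, the /api/score path and the score fields are made-up placeholders; only HttpPool, base_fail_op and base_log come from the source above, and the sketch stays on Python 2 like the rest of the article.

# A minimal usage sketch (Python 2), assuming the code above is saved as httppool.py.
# example.com and /api/score are hypothetical; substitute your own server and endpoint.
from httppool import HttpPool, base_fail_op, base_log
import time

pool = HttpPool(2, base_fail_op, base_log)            # pool with 2 worker threads
pool.add_task(1, 'example.com', '/api/score',         # task id, host, url
              {'player': 'player1', 'score': 100},    # form-encoded by the worker thread
              method = 'POST', timeout = 10)

time.sleep(2.0)                                       # give the workers a moment to finish
for tid, body in pool.get_results():                  # drain whatever has completed so far
    print tid, len(body)

In a real program you would poll get_results() periodically instead of sleeping once, since the pool reports completions asynchronously through its result queue.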
That is all for this article. I hope it helps with your studies, and I hope you will continue to support us.