Date: 2021-05-19
An airline's logistics waybill query is a POST request. Simulating the POST HTTP request from a backend program returned no page data; after inspecting the airline's website, it turned out the site uses an anti-CSRF mechanism and simply returns a 40x error. (What CSRF is and how the protection works is left for the reader to look up.)
Headers
Form Data
The request headers include the Cookie and an x-csrf-token; the form data includes a _csrf field (its value is the same as the one in the headers).
Reading the site's JS source shows that the _csrf value comes from a <meta name="_csrf" ...> tag inside the page's <head>.
Presumably the cookie and the x-csrf-token are valid for a limited time, and the two work together to defend against CSRF.
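For illustration, here is a hypothetical example of what such a <head> fragment might look like and how the token could be pulled out with a regular expression. The exact meta names are an assumption; the extraction code actually used in this post appears further below.

using System;
using System.Text.RegularExpressions;

class MetaCsrfDemo
{
    static void Main()
    {
        // Hypothetical page <head> fragment; real sites differ in the exact meta names.
        string html = @"<meta name=""_csrf"" content=""a1b2c3d4-e5f6-7890""/>" +
                      @"<meta name=""_csrf_header"" content=""X-CSRF-TOKEN""/>";

        // Grab the content attribute of the _csrf meta tag.
        Match m = Regex.Match(html, @"<meta name=""_csrf"" content=""(.*?)""/>");
        string csrf = m.Groups[1].Value;

        // This value must be sent back as both the X-CSRF-TOKEN header and the _csrf form field.
        Console.WriteLine(csrf);
    }
}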
1. First, request the airline's website once (a plain GET) to obtain the cookie and the _csrf value.
2. Then simulate the HTTP request in C#, putting those values into the request headers and the form data respectively, and send the POST.
Code
// Requires: using System; using System.Collections.Generic; using System.Text.RegularExpressions;
public class CSRFToken
{
    string cookie;        // cookie for the target site
    List<string> csrfs;   // csrf token values extracted from the target site

    public CSRFToken(string url)
    {
        if (!string.IsNullOrWhiteSpace(url))
        {
            try
            {
                // GET the page once; the response carries the Set-Cookie header
                // and the HTML whose <head> contains the _csrf meta tags.
                var _http = new HttpHelper(url);
                string cookie;
                string html = _http.CreateGetHttpResponse(out cookie);
                this.cookie = cookie;

                // Find the _csrf meta tags and pull out their content attributes.
                string headRegex = @"<meta name=""_csrf.*"" content="".*""/>";
                MatchCollection matches = Regex.Matches(html, headRegex);
                Regex re = new Regex("(?<=content=\").*?(?=\")", RegexOptions.None);
                csrfs = new List<string>();
                foreach (Match match in matches)
                {
                    foreach (Match ma in re.Matches(match.Value))
                    {
                        csrfs.Add(ma.Value);
                    }
                }
            }
            catch (Exception e) { } // swallow errors; cookie/csrfs stay null
        }
    }

    public string getCookie() { return cookie; }
    public void setCookie(string cookie) { this.cookie = cookie; }
    public List<string> getCsrf_token() { return csrfs; }
}

HttpHelper:
// Requires: using System; using System.IO; using System.Net; using System.Net.Security; using System.Text;
public string CreatePostHttpResponse(IDictionary<string, string> headers, IDictionary<string, string> parameters)
{
    // HTTPS request: accept the server certificate and force TLS 1.1/1.2.
    HttpWebRequest request = null;
    UTF8Encoding encoding = new System.Text.UTF8Encoding();
    ServicePointManager.ServerCertificateValidationCallback = new RemoteCertificateValidationCallback(CheckValidationResult);
    request = WebRequest.Create(_baseIPAddress) as HttpWebRequest;
    request.ProtocolVersion = HttpVersion.Version10;
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls11;
    request.Method = "POST";
    request.ContentType = "application/x-www-form-urlencoded";

    // Custom headers (Cookie, X-CSRF-TOKEN, ...).
    if (!(headers == null || headers.Count == 0))
    {
        foreach (string key in headers.Keys)
        {
            request.Headers.Add(key, headers[key]);
        }
    }

    // Write the POST body if there is form data.
    if (!(parameters == null || parameters.Count == 0))
    {
        StringBuilder buffer = new StringBuilder();
        int i = 0;
        foreach (string key in parameters.Keys)
        {
            if (i > 0)
            {
                buffer.AppendFormat("&{0}={1}", key, parameters[key]);
            }
            else
            {
                buffer.AppendFormat("{0}={1}", key, parameters[key]);
            }
            i++;
        }
        byte[] data = encoding.GetBytes(buffer.ToString());
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(data, 0, data.Length);
        }
    }

    HttpWebResponse response;
    try
    {
        // Read the response stream.
        response = (HttpWebResponse)request.GetResponse();
        Stream s = response.GetResponseStream();
        StreamReader readStream = new StreamReader(s, Encoding.UTF8);
        string SourceCode = readStream.ReadToEnd();
        response.Close();
        readStream.Close();
        return SourceCode;
    }
    catch (WebException ex)
    {
        response = ex.Response as HttpWebResponse;
        return null;
    }
}

public string CreateGetHttpResponse(out string cookie)
{
    // HTTPS request with the same certificate/TLS settings as above.
    HttpWebRequest request = null;
    UTF8Encoding encoding = new System.Text.UTF8Encoding();
    ServicePointManager.ServerCertificateValidationCallback = new RemoteCertificateValidationCallback(CheckValidationResult);
    request = WebRequest.Create(_baseIPAddress) as HttpWebRequest;
    request.ProtocolVersion = HttpVersion.Version10;
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls11;
    request.Method = "GET";
    request.ContentType = "application/x-www-form-urlencoded";
    request.UserAgent = DefaultUserAgent;

    HttpWebResponse response;
    try
    {
        // Read the response stream and remember the Set-Cookie header.
        response = (HttpWebResponse)request.GetResponse();
        cookie = response.Headers["Set-Cookie"];
        Stream s = response.GetResponseStream();
        StreamReader readStream = new StreamReader(s, Encoding.UTF8);
        string SourceCode = readStream.ReadToEnd();
        response.Close();
        readStream.Close();
        return SourceCode;
    }
    catch (WebException ex)
    {
        response = ex.Response as HttpWebResponse;
        cookie = "";
        return null;
    }
}

Crawler program:
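The crawler code itself is not included in the post, so the following is only a minimal sketch of how the two pieces above could be wired together. The query URL and the orderNo form field are hypothetical placeholders, and it is assumed that the first value returned by getCsrf_token() is the token itself.

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Hypothetical query URL; the real endpoint is not given in the post.
        string siteUrl = "https://airline.example.com/logistics/query";

        // Step 1: GET the page once to obtain the cookie and the _csrf value.
        var token = new CSRFToken(siteUrl);
        string csrf = token.getCsrf_token()[0]; // assumed to be the token value

        // Step 2: POST with the Cookie + X-CSRF-TOKEN headers and the _csrf form field.
        var headers = new Dictionary<string, string>
        {
            { "Cookie", token.getCookie() },
            { "X-CSRF-TOKEN", csrf }
        };
        var formData = new Dictionary<string, string>
        {
            { "_csrf", csrf },
            { "orderNo", "1234567890" } // hypothetical waybill-number field
        };

        var http = new HttpHelper(siteUrl);
        string result = http.CreatePostHttpResponse(headers, formData);
        Console.WriteLine(result ?? "request failed (40x): refresh cookie/_csrf and retry");
    }
}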
1. Different websites deliver the csrf value in different ways, but however it is done, as long as the information reaches the front end there is a corresponding way to extract it.
2. The HTTP requirements of the request may also differ. When testing this airline's logistics query, the request had to use TLS 1.2 as its security protocol:
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls11;
Other values, such as the UserAgent, may also be validated by the backend.
3. For this airline, the cookie and csrf_token do not change within a certain time window, so in a real crawl you can cache the cookie and csrf_token and only re-fetch them when a request fails (a minimal sketch of this caching-and-retry idea follows below).
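A minimal sketch of that caching idea, assuming the CSRFToken and HttpHelper classes above. The class name, the single-retry policy, and the assumption that getCsrf_token()[0] is the token value are illustrative, not the original author's code.

using System;
using System.Collections.Generic;

class CachedCsrfClient
{
    private readonly string _siteUrl;
    private CSRFToken _token; // cached cookie + csrf, reused across requests

    public CachedCsrfClient(string siteUrl)
    {
        _siteUrl = siteUrl;
        _token = new CSRFToken(siteUrl); // fetch once up front
    }

    public string Query(IDictionary<string, string> formData)
    {
        string html = Post(formData);
        if (html == null)
        {
            // Request failed (e.g. 40x): the cached cookie/_csrf probably expired,
            // so refresh them once and retry.
            _token = new CSRFToken(_siteUrl);
            html = Post(formData);
        }
        return html;
    }

    private string Post(IDictionary<string, string> formData)
    {
        string csrf = _token.getCsrf_token()[0]; // assumed token value
        var headers = new Dictionary<string, string>
        {
            { "Cookie", _token.getCookie() },
            { "X-CSRF-TOKEN", csrf }
        };
        var data = new Dictionary<string, string>(formData) { { "_csrf", csrf } };
        return new HttpHelper(_siteUrl).CreatePostHttpResponse(headers, data);
    }
}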