urllib vs urllib2 in Python - fetch the content of 404 or raise exception?
urllib
examples/python/try_urllib.py
from __future__ import print_function
import urllib, sys

def fetch():
    if len(sys.argv) != 2:
        print("Usage: {} URL".format(sys.argv[0]))
        return
    url = sys.argv[1]
    f = urllib.urlopen(url)
    html = f.read()
    print(html)

fetch()
Running python try_urllib.py https://www.python.org/xyz will print the full HTML of the 404 error page. Even though the server responded with status 404, urllib.urlopen() does not raise an exception; it just returns the content of the error page as if it were a regular response.
urllib2
examples/python/try_urllib2.py
from __future__ import print_function
import urllib2, sys

def fetch():
    if len(sys.argv) != 2:
        print("Usage: {} URL".format(sys.argv[0]))
        return
    url = sys.argv[1]
    try:
        f = urllib2.urlopen(url)
        html = f.read()
        print(html)
    except urllib2.HTTPError as e:
        print(e)

fetch()
Running python try_urllib2.py https://www.python.org/xyz will print
HTTP Error 404: OK
Published on 2015-07-06
If you have any comments or questions, feel free to post them on the source of this page on GitHub.