A Quick Script to Find Any Broken Links on Your Site 🎯

Introduction

It seems like almost every other click on the internet ends up on an "Error 404: Page Not Found" page. "Whoops, the page you're looking for does not exist." "Sorry, the requested URL was not found on this server." "Oops, something went wrong. Page not found." Every internet user has seen pages like these.

I think web developers should pay less attention to building clever 404 pages, and more to eliminating broken links altogether.

The Program

I've built an automated program to find broken links.

[Program demo]

Written in Python 3, it recursively follows links on any given site and checks each one for 404 errors. When the program has finished searching an entire site, it prints out any broken links it found and the pages those links appear on, so that developers can fix them.

Note that the program does make a lot of HTTP requests in a relatively short period of time, so be aware of internet usage rates, and be careful not to hammer the server you're checking.
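If you want to slow the crawl down, one simple option is to pause briefly between requests with time.sleep. This is just a sketch of the idea, not part of the script in the next section: polite_get is a hypothetical helper name and the half-second delay is an arbitrary example value, but you could swap a call like this in wherever the script uses requests.get.

import time

import requests

# Fetch a URL, then pause briefly so requests are not fired back-to-back.
# The 0.5-second default is an arbitrary example value; tune it for the
# site you are checking.
def polite_get(url, delay=0.5):
    response = requests.get(url)
    time.sleep(delay)
    return response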

Usage

1. Check if you have Python 3 installed:

If the following command does not yield a version number, download Python 3 from python.org.

$ python3 -V
2. Install the Requests package (for making HTTP requests) and the BeautifulSoup package (for HTML parsing) from PyPI with pip.

(Note: I do not maintain these packages and am not associated with them, so download at your own risk.)

$ pip3 install requests
$ pip3 install beautifulsoup4
3. Copy and paste the following code into a file (I use the name find_broken_links.py in this article). There is a short note on how it resolves relative links after these steps.

import requests
import sys
from bs4 import BeautifulSoup
from urllib.parse import urlparse
from urllib.parse import urljoin

searched_links = []
broken_links = []

# Collect the href attribute of every anchor tag on a page.
def getLinksFromHTML(html):
    def getLink(el):
        return el["href"]
    return list(map(getLink, BeautifulSoup(html, features="html.parser").select("a[href]")))

def find_broken_links(domainToSearch, URL, parentURL):
    # Skip links already checked, non-HTTP schemes, and image files.
    if (URL not in searched_links and not URL.startswith("mailto:")
            and "javascript:" not in URL
            and not URL.endswith((".png", ".jpg", ".jpeg"))):
        try:
            requestObj = requests.get(URL)
            searched_links.append(URL)
            if requestObj.status_code == 404:
                broken_links.append("BROKEN: link " + URL + " from " + parentURL)
                print(broken_links[-1])
            else:
                print("NOT BROKEN: link " + URL + " from " + parentURL)
                # Only crawl deeper into pages on the site being searched.
                if urlparse(URL).netloc == domainToSearch:
                    for link in getLinksFromHTML(requestObj.text):
                        find_broken_links(domainToSearch, urljoin(URL, link), URL)
        except Exception as e:
            print("ERROR: " + str(e))
            # Record the URL so a failing link is not retried on every visit.
            searched_links.append(URL)

find_broken_links(urlparse(sys.argv[1]).netloc, sys.argv[1], "")

print("\n--- DONE! ---\n")
print("The following links were broken:")

for link in broken_links:
    print("\t" + link)
4. Run the script on the command line with a website of your choice.

$ python3 find_broken_links.py https://your_site.com/
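When the script finishes, the output should look something like this. The URLs here are made up for illustration; the exact format comes from the script's print statements.

NOT BROKEN: link https://your_site.com/about from https://your_site.com/
BROKEN: link https://your_site.com/old-post from https://your_site.com/blog

--- DONE! ---

The following links were broken:
    BROKEN: link https://your_site.com/old-post from https://your_site.com/blog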
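If the script instead fails right away with an ImportError, double-check the installs from step 2. This one-liner exits silently when both packages are importable:

$ python3 -c "import requests, bs4"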
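One last note, on the code in step 3: href attributes are often relative paths (like /about or ../post), so the script uses urljoin to resolve each link against the page it was found on, and compares urlparse(...).netloc against the starting domain so that it only crawls deeper into the original site. A minimal illustration, using a made-up example.com URL:

from urllib.parse import urljoin, urlparse

# A relative href is resolved against the page it appeared on.
print(urljoin("https://example.com/blog/post/", "../about"))  # https://example.com/blog/about

# netloc is the domain part of a URL; the script uses it to stay on one site.
print(urlparse("https://example.com/blog/post/").netloc)  # example.com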

Conclusion

I hope you found this useful; it certainly helped me find a few broken links on my own site.

This program is CC0 licensed, so it is completely free to use, but it comes with no warranties or guarantees.

If you liked this post, share it with your friends and colleagues!

Thanks for scrolling.

— Gabriel Romualdo, November 10, 2019