Hextrakt is a crawler created by SEO experts for your technical SEO audits. Its purpose is to provide a comprehensive technical overview and in-depth analysis for all types of websites, including large ones. We designed it to be always at your fingertips, ready to crawl, running fast crawls to get results quickly, with an easy-to-use, ergonomic interface. We will extend this user guide as needed. Feel free to ask us if you don't find something here or in the FAQ.
Any feedback about Hextrakt is welcome; we'll do our best to keep improving it. Contact us on Twitter.
Asynchronous & adaptive crawls
Most of the time we don't know how many parallel connections a server can support, and crawling too fast can slow the server down by consuming too many of its resources. We also want to be able to keep using our computer during a crawl. That's why we developed an asynchronous and adaptive technology: Hextrakt analyses server responses and the capacity of the client computer to process the crawled URLs, automatically setting the best crawl speed. If needed, you can set the number of parallel connections in the crawl configuration.
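To give an idea of how adaptive crawl speed can work in principle, here is a minimal conceptual sketch: a controller that raises the number of parallel connections while the server responds quickly and backs off when latency climbs. The function name, thresholds, and back-off strategy are illustrative assumptions, not Hextrakt's actual algorithm.

```python
def adjust_concurrency(current, avg_response_ms,
                       target_ms=500, min_conn=1, max_conn=20):
    """Hypothetical adaptive controller (not Hextrakt's real logic).

    Ramp up slowly while the server is fast, halve the connection
    count when average latency exceeds the target, and always stay
    within the [min_conn, max_conn] bounds.
    """
    if avg_response_ms < target_ms * 0.5:
        current += 1                     # server is comfortable: ramp up gently
    elif avg_response_ms > target_ms:
        current = current // 2           # server is straining: back off sharply
    return min(max_conn, max(min_conn, current))
```

A crawler would call such a function periodically, feeding it a rolling average of recent response times, so the crawl speeds up or slows down without manual tuning.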
Hextrakt's main menu gives access to these crawl reports:
- Directives: indexability, canonical errors, hreflang, mobile alternate pages and AMP pages
- On page: duplicate content, titles and meta descriptions, content and markup (Hn headings, word count, structured data and social markup, HTML5 sections), images
- Internal linking, external outlinks, broken and redirected links
- Performance: status codes, load time, size...
- Analytics and Search Console data: active pages, orphan pages, pages with SERP impressions
- History: crawl comparisons (new pages, new errors, new or lost URLs...)
To analyze the collected data and make it meaningful, we strongly recommend first categorizing the URLs with tags, for example by template or by theme. You will then easily find insights using the graphs provided in each report, with a "by tag" distribution.
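Categorizing by template usually comes down to matching URL paths against patterns. The sketch below illustrates the idea with hypothetical rules and tag names; it is not Hextrakt's tagging syntax, just a way to picture how URLs map to tags.

```python
import re

# Hypothetical tag rules: first matching pattern wins.
TAG_RULES = [
    (re.compile(r"^/blog/"), "blog"),
    (re.compile(r"^/products?/"), "product"),
    (re.compile(r"^/category/"), "category"),
]

def tag_url(path, rules=TAG_RULES, default="other"):
    """Return the tag of the first rule matching the URL path."""
    for pattern, tag in rules:
        if pattern.search(path):
            return tag
    return default
```

Once every URL carries a tag like this, per-tag breakdowns (error rates, load times, orphan pages) become straightforward to read in the report graphs.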