Technical SEO Audit: The Complete 2026 Checklist for Australian Businesses
A solid SEO strategy rests on three pillars: content, authority, and technical foundation. You can write the best article in Australia, but if your site’s technical infrastructure is broken, Google will struggle to crawl it, index it, and rank it.
A technical SEO audit is the systematic process of finding and fixing the technical issues holding your site back. Unlike content or link-building work, technical SEO is measurable, repeatable, and often the quickest way to improve rankings—because you’re removing barriers rather than building new authority.
This guide walks you through everything you need to audit your own site, or use to brief a technical SEO agency. We’ve organised it by category so you can tackle issues in priority order.
What Is a Technical SEO Audit (And Why It Matters)
A technical SEO audit examines how Google and other search engines interact with your website. It looks at:
- Crawlability: Can Google’s bots actually reach your pages?
- Indexation: Are pages being indexed, or blocked?
- Core Web Vitals: Is your site fast enough to rank well?
- Mobile friendliness: Does your site work properly on phones and tablets?
- Structured data: Are search engines understanding your content correctly?
- Security and canonicalisation: Are you sending the right signals to search engines?
The payoff is real. Fixing technical issues often leads to:
- 10–30% increase in indexed pages (when you remove blocks)
- 15–40% faster load times (when you optimise Core Web Vitals)
- 5–20% uplift in rankings for existing content (when crawl budget is freed up)
- Cleaner crawl logs and fewer Google Search Console errors
The order matters, though. You don’t optimise images before fixing canonicalisation—you’d be wasting time. That’s why we’ve structured this guide by priority.
Priority Order: What to Fix First
If you only have limited time and budget, fix issues in this order:
- Indexation issues (noindex pages, blocked content) — These block your entire site from ranking.
- Canonicalisation and redirects (301/302 issues, self-referential canonicals) — These dilute ranking potential.
- Mobile usability and Core Web Vitals (LCP, INP, CLS) — Google has made these explicit ranking factors.
- Crawlability (robots.txt, blocked resources) — These prevent Google from seeing your content.
- Site structure and internal linking (orphaned pages, poor hierarchy) — These affect how authority flows through your site.
- Schema markup and structured data (missing, incorrect, or conflicting schemas) — These improve SERP appearance and rich results.
- Security and HTTPS (mixed content, SSL issues) — These are ranking factors and trust signals.
Now let’s walk through each category in detail.
1. Indexation: The Foundation
If pages aren’t indexed, they can’t rank. Before anything else, verify what Google actually has indexed.
What to Check
Indexed page count in Google Search Console (GSC)
- Log in to Google Search Console
- Go to Indexing > Pages
- Note the total indexed pages
- Compare this to your expected site size (e.g., if you have 500 blog posts, you should see roughly that number indexed)
Sitemaps are submitted and valid
- In GSC, go to Sitemaps
- Check that your XML sitemap is submitted and showing “Success”
- Compare indexed vs submitted; a big gap suggests excluded pages
No widespread noindex tags
- Run a site crawl with Screaming Frog (free version works for up to 500 URLs)
- Filter for pages with `noindex` meta tags
- Ask: should these pages be noindex? (Common mistake: accidentally noindexing entire site sections)
How to Check It
Tools:
- Google Search Console (free, essential)
- Screaming Frog SEO Spider (free up to 500 URLs; paid: AU$199/year)
- Ahrefs Site Audit (paid; AU$99/month+) for larger sites
How to interpret:
- Indexed count is 80–100% of submitted: Good.
- Indexed count is 50–80% of submitted: You have exclusions (noindex, robots.txt blocks, or poor linking). Investigate.
- Indexed count is <50% of submitted: Major issue. Pages are either blocked or genuinely orphaned.
What Passing Looks Like
- Your entire site (except intentional noindex pages like privacy, login, duplicates) is indexed
- GSC shows indexed count matching your expected page count
- Sitemap is green and valid
- No widespread "Discovered – currently not indexed" errors in GSC
What Failing Looks Like
- GSC shows only 200 indexed pages but you have 1,000+ on your site
- Large sections of your site are marked “noindex” unintentionally
- Sitemap shows red errors (invalid XML, too many URLs, etc.)
- Pages have `<meta name="robots" content="noindex">` when they shouldn't
How to Fix It
If pages are noindex by accident:
- Find pages with unintended noindex (use Screaming Frog filter)
- Remove the noindex tag from the template or page
- Don't expect instant results: Google needs to re-crawl the page before the change takes effect
- Go to GSC URL Inspection for one of the affected pages
- Click Request indexing
- Google will re-crawl within days
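For reference, the tag you're removing (or changing) sits in the page `<head>` and looks like this:
```html
<!-- This tag tells Google not to index the page; delete it or change noindex to index -->
<meta name="robots" content="noindex, follow">
```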
If pages are blocked in robots.txt:
- Check your `/robots.txt` file (visit `yoursite.com/robots.txt`)
- Look for broad blocks like:
```
User-agent: *
Disallow: /
```
- If unintentional, fix it (or remove the entire block)
- Request indexing again in GSC
If pages are orphaned (no internal links):
- Use Screaming Frog to find pages with 0 internal links
- Add relevant internal links to these pages from your main navigation or content
- Once linked, they’ll eventually be discovered and indexed
2. Canonicalisation and Redirects
Canonicalisation tells Google which version of a page is the “main” one when duplicates exist. Bad canonicals dilute your ranking potential by splitting authority across multiple URLs.
What to Check
Self-referential canonicals
Every page should have a self-referential canonical (pointing to itself). Check:
- Visit your homepage and a few content pages
- View the page source (Ctrl+U or Cmd+U in Chrome)
- Look for a line like:
```html
<!-- href should exactly match the URL of the page you're on -->
<link rel="canonical" href="https://yoursite.com/current-page/" />
```
- Verify the URL matches the page you’re viewing exactly
Redirect chains (bad)
If URL A redirects to URL B, and URL B redirects to URL C, that’s a redirect chain. Google dislikes these.
- Use Redirect Checker or Screaming Frog
- Enter URLs that have been rewritten or moved
- Check the redirect path—it should be ≤2 hops (direct 301 is ideal)
HTTPS vs HTTP consistency
If you’ve migrated to HTTPS, ensure:
- HTTP version (non-secure) redirects to HTTPS
- Old URLs don’t have canonical tags pointing to different versions
- Search Console has both versions added (HTTP and HTTPS)
How to Check It
Tools:
- Chrome DevTools (right-click > Inspect, search the page source for `rel="canonical"`)
- Screaming Frog (crawl, then check the Canonicals tab)
- Ahrefs Site Audit
Steps:
- Crawl your site with Screaming Frog (or equivalent)
- Export the crawl data
- Filter for pages with canonical issues (self-canonical mismatches, pointing to noindex pages, or external canonicals)
What Passing Looks Like
- Every page has a self-referential canonical
- No redirect chains (A → B → C)
- HTTPS and HTTP are consistently redirected or canonicalised
- Canonicals point to publicly accessible pages (not noindex, not blocked)
What Failing Looks Like
- Pages missing canonical tags entirely
- Canonicals pointing to HTTPS when the site is HTTP (or vice versa)
- Redirect chains: oldurl.com → interim.com → newsite.com
- Canonicals pointing to pages that return 404 or 410
How to Fix It
Add missing canonicals:
- If using WordPress with Yoast or RankMath, they auto-add self-referential canonicals—nothing to do
- If custom site: add `<link rel="canonical" href="...">` to the `<head>` of every page
- Test one page with URL Inspection in GSC to confirm Google sees the canonical
Break redirect chains:
- Identify the final destination (e.g., newsite.com)
- Update any intermediate redirects to point directly to the destination
- Example (bad): A → B → C. Fix: A → C, B → C
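On Apache, breaking a chain means pointing every legacy URL straight at the final destination. A minimal sketch, assuming hypothetical paths:
```apache
# A -> C and B -> C directly, instead of A -> B -> C
Redirect 301 /old-page/ https://yoursite.com/new-page/
Redirect 301 /interim-page/ https://yoursite.com/new-page/
```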
Migrate HTTPS without losing rankings:
- Set up SSL certificate (most hosts include Let's Encrypt free)
- Update all internal links to HTTPS URLs
- Set up 301 redirects from HTTP to HTTPS at the server level (.htaccess on Apache, or server config on Nginx)
- Update XML sitemap to reference HTTPS URLs
- In GSC, add the HTTPS version as a new property
- Submit sitemap to HTTPS property
- Keep the old HTTP property in GSC to monitor the migration; your 301 redirects tell Google which version is canonical (the old "Preferred domain" setting was retired in 2019)
3. Mobile Friendliness and Core Web Vitals
Google has explicitly stated that page experience (including mobile usability and Core Web Vitals) is a ranking factor. This is non-negotiable in 2026.
What to Check
Mobile usability
- Run a Lighthouse mobile audit in Chrome DevTools (Google retired the standalone GSC Mobile usability report in 2023)
- Check for errors:
- Clickable elements too close together (button spacing issues)
- Text too small to read
- Viewport not configured (no `<meta name="viewport">` tag)
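If the viewport error appears, the fix is usually one standard tag in the `<head>`:
```html
<!-- Tells mobile browsers to render at device width rather than a zoomed-out desktop view -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```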
Core Web Vitals
Three metrics matter:
- LCP (Largest Contentful Paint): Time until the largest element on the page loads. Target: <2.5 seconds.
- INP (Interaction to Next Paint): Time from user interaction (click) to the page responding. Target: <200 milliseconds. (This replaced FID in 2024.)
- CLS (Cumulative Layout Shift): Visual stability as the page loads. Target: <0.1.
How to Check It
Tools:
- Google Search Console > Core Web Vitals (free, real-world data from actual visitors)
- PageSpeed Insights (free, at pagespeed.web.dev)
- WebPageTest (free, webpagetest.org)
Steps:
- Open PageSpeed Insights for your homepage and a typical blog post
- Check the "Core Web Vitals" section
- In GSC, go to Core Web Vitals to see real-world data (this is what actually affects rankings)
- Look for pages with "Poor" status
What Passing Looks Like
- LCP: >75% of page views have LCP <2.5s
- INP: >75% of page views have INP <200ms
- CLS: >75% of page views have CLS <0.1
- No mobile usability errors flagged in Lighthouse audits
- PageSpeed Insights shows "Good" (green) for Core Web Vitals
What Failing Looks Like
- LCP 4–5 seconds (common on image-heavy sites)
- INP 300–500ms (common on sites with heavy JavaScript)
- CLS 0.2+ (pages shift as ads or late-loading content appears)
- Mobile usability errors flagged in Lighthouse audits
- PageSpeed shows "Poor" (red) for any metric
How to Fix It
Improve LCP (load time of largest element):
- Compress images (use WebP format, target <100KB per image)
- Lazy-load images below the fold (native `loading="lazy"`; see the snippet after this list)
- Upgrade hosting or use a CDN (Cloudflare free tier helps)
- Minimise render-blocking CSS/JavaScript (defer non-critical JS)
- Enable browser caching
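To illustrate the first two fixes, a sketch with placeholder file paths:
```html
<!-- Hero (LCP) image: compressed WebP, explicit dimensions, high fetch priority -->
<!-- (fetchpriority is supported in Chromium-based browsers; harmless elsewhere) -->
<img src="/images/hero.webp" width="1200" height="600" alt="Hero banner" fetchpriority="high">

<!-- Below-the-fold image: native lazy-loading defers the download -->
<img src="/images/gallery-1.webp" width="600" height="400" alt="Gallery photo" loading="lazy">
```
Never lazy-load the LCP element itself; that delays the very metric you're trying to improve.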
Improve INP (response time to user interaction):
- Defer heavy JavaScript (don't run large scripts on page load)
- Break up long JavaScript tasks into smaller chunks
- Remove or replace slow third-party scripts (analytics, ads, chat widgets)
- Optimise event listeners (don't attach listeners to every element—use event delegation)
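As an example of event delegation, one listener on a shared container replaces a listener per element. A sketch: `#product-list`, `.product`, and `showDetails` are hypothetical names for your own markup and handler:
```html
<script>
  // One listener on the container handles clicks for every .product inside it
  document.querySelector('#product-list').addEventListener('click', (event) => {
    const item = event.target.closest('.product');
    if (item) showDetails(item.dataset.id); // requires data-id attributes on .product elements
  });
</script>
```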
Improve CLS (layout stability):
- Set explicit dimensions on images (width and height attributes)
- Avoid inserting content above existing content (e.g., sticky ads that push content down)
- Use `transform` instead of `top`/`left` for animations (GPU-accelerated, doesn't trigger layout shifts)
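Two of these fixes in miniature (class names and dimensions are illustrative):
```html
<!-- Explicit width/height reserve space before the image loads, so nothing shifts -->
<img src="/images/ad-banner.webp" width="728" height="90" alt="Banner">

<style>
  /* Slide in with transform (compositor-only) rather than animating top/left */
  .sticky-banner { transition: transform 0.3s ease; transform: translateY(-100%); }
  .sticky-banner.visible { transform: translateY(0); }
</style>
```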
Fix mobile usability issues:
- Run Lighthouse's mobile audit to identify specific issues (Google retired the standalone Mobile-Friendly Test in 2023)
- Ensure buttons and links are at least 48x48 pixels
- Use responsive design (same HTML, different CSS for mobile)
- Test on actual devices, not just Chrome DevTools
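For the "same HTML, different CSS" point, a minimal responsive sketch (breakpoint and class names are illustrative):
```html
<style>
  .row { display: flex; gap: 1rem; }
  /* On narrow screens, stack the columns instead of squeezing them */
  @media (max-width: 600px) {
    .row { flex-direction: column; }
  }
</style>
```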
4. Crawlability: Letting Google In
If Google can't crawl your site, it can't index it. Crawlability issues are usually about blocking, but sometimes about structure.
What to Check
robots.txt file
- Visit `yoursite.com/robots.txt`
- Check for:
- Broad blocks: `Disallow: /` (blocks entire site—usually a mistake)
- Blocking important folders: `Disallow: /blog/` (blocks all blog posts)
- Unnecessary Disallow rules
Blocked resources in GSC
- In GSC, go to Settings > Crawl stats
- Check "Blocked by robots.txt" and "Other resources blocked"
- High numbers here mean Google can't crawl parts of your site
Blocked resources (CSS, JS, images)
- Go to GSC Indexing > Pages and review the "Why pages aren't indexed" list
- Look for reasons like "Blocked by robots.txt" or "Crawled – currently not indexed"
- Check if critical resources are blocked in robots.txt
How to Check It
Tools:
- Google Search Console (free)
- Screaming Frog (crawl with "obey robots.txt" enabled, then disabled, and compare)
Steps:
- Visit your `robots.txt`
- Note any Disallow rules
- In Screaming Frog, crawl your site twice—once with robots.txt obeyed, once without
- Compare the results; the difference shows what's being blocked
- Ask: is this intentional?
What Passing Looks Like
- robots.txt exists and is minimal
- No broad site-wide blocks (unless you intentionally want a private site)
- Important resources (CSS, JS, images for public pages) are not blocked
- GSC Crawl Stats show low "Blocked by robots.txt" count
What Failing Looks Like
- robots.txt blocks `/blog/` or other major sections
- robots.txt blocks all CSS and JavaScript (prevents Google from rendering JavaScript-dependent content)
- Crawl stats show thousands of blocked URLs
How to Fix It
Update robots.txt:
A typical good robots.txt looks like:
```
User-agent: *
Disallow: /private/
Disallow: /admin/
Disallow: /cart/
Disallow: /account/
Allow: /

Sitemap: https://yoursite.com/sitemap.xml
```
Key rules:
- Only block pages you don't want indexed (private areas, login pages, cart, etc.)
- Do NOT block CSS, JS, or images
- Do NOT use Disallow: / (blocks entire site)
- Edit your `robots.txt` (usually in your site root)
- Remove unnecessary blocks
- Add a `Sitemap:` line pointing to your XML sitemap
- Save, then verify the file in GSC's robots.txt report under Settings (the standalone robots.txt Tester was retired in 2023)
In GSC, re-request crawl:
- After fixing robots.txt, request Google to re-crawl
- Visit Crawl stats again 24–48 hours later
- "Blocked by robots.txt" count should drop
5. Site Structure and Internal Linking
Your site's architecture determines how authority flows and how easily Google can navigate your content.
What to Check
Orphaned pages (0 internal links)
- Crawl site with Screaming Frog
- Filter for pages with 0 inlinks
- These pages won't be discovered organically—you must link to them
Hierarchy and category structure
- Map your site structure:
- Root domain (homepage)
- Top-level categories
- Subcategories (if applicable)
- Individual pages
- Ask: is it logical? Flat is usually better than deep.
Anchor text in internal links
- Audit internal links for anchor text quality
- "Click here" is useless; "technical SEO audit" is valuable
- Use Screaming Frog to export internal links and review anchor text
How to Check It
Tools:
- Screaming Frog (inlink analysis, site visualisation)
- GSC Links report (covers both external and internal links, with less detail than a dedicated crawler)
Steps:
- Crawl with Screaming Frog
- Go to Inlinks and sort by inlinks (ascending)
- Pages with 0 inlinks are orphaned
- Check your navigation and main content areas—most pages should be 1–3 clicks from the homepage
What Passing Looks Like
- No orphaned pages (every page has at least 1 internal link)
- Homepage is 0 clicks, main sections are 1–2 clicks, content is 2–3 clicks
- Internal anchor text includes keywords and is descriptive
- Top pages (by authority) link to important pages
What Failing Looks Like
- Hundreds of orphaned pages
- Deep nested structure (5+ levels to reach a page)
- Anchor text like "click here" or "more"
- Your most authoritative pages don't link to your important pages
How to Fix It
Link orphaned pages:
- Find pages with 0 inlinks (use Screaming Frog)
- For each: add a relevant internal link from your main navigation, category page, or related content
- Use descriptive anchor text that includes a relevant keyword
Flatten your hierarchy:
- Avoid more than 3 levels of nesting (Homepage > Category > Subcategory > Page is the limit)
- Link important pages from the homepage or main nav
- Use breadcrumb navigation so users and Google understand hierarchy
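A breadcrumb trail can be plain HTML; pair it with the `BreadcrumbList` schema covered in section 6. URLs here are placeholders:
```html
<nav aria-label="Breadcrumb">
  <a href="/">Home</a> &rsaquo;
  <a href="/services/">Services</a> &rsaquo;
  <span aria-current="page">Technical SEO</span>
</nav>
```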
Audit and improve anchor text:
- Export internal links from Screaming Frog
- Replace generic anchor text ("read more", "click here") with descriptive text
- Example: "For more on Core Web Vitals, see our Core Web Vitals Australia guide" instead of "Read more here"
6. Schema Markup and Structured Data
Structured data (schema markup) helps Google understand your content and can unlock rich results (star ratings, FAQs, recipes, etc.).
What to Check
Presence of schema markup
- Visit homepage and a few blog posts
- View page source (Ctrl+U)
- Look for `<script type="application/ld+json">` blocks
- Common schemas for a content site: `Organization`, `Article`, `BlogPosting`, `FAQPage`, `BreadcrumbList`
Validity of schema
- Use Google's Rich Results Test (free)
- Paste your homepage and a content page URL
- Check for "Errors" (must fix) vs "Warnings" (should fix)
Coverage (are all pages marked up?)
- In GSC, go to Enhancements > Rich Results
- Check which pages have valid structured data
- Ask: are all your articles marked up, or just some?
How to Check It
Tools:
- Google Rich Results Test (free, at search.google.com/test/rich-results)
- Schema.org (schema.org, reference documentation)
- Screaming Frog (crawl and check for `script[type="application/ld+json"]`)
- Schema Markup Validator (validator.schema.org, the successor to the retired Structured Data Testing Tool)
Steps:
- Test your homepage in Rich Results Test
- Test a blog post in Rich Results Test
- Check GSC Rich Results for coverage
- Look for "Errors" and note them
What Passing Looks Like
- All pages have at least `Organization` schema (with name, logo, contact info)
- All blog posts have `Article` or `BlogPosting` schema with headline, image, published date, author
ArticleorBlogPostingschema with headline, image, published date, author - No "Errors" in Rich Results Test (Warnings are okay)
- GSC shows "Rich results detected" for most content pages
What Failing Looks Like
- No schema markup at all
- Schema present but with errors (e.g., "author" field missing)
- Conflicting schemas (multiple Article schemas on one page)
- Schema marked as "Errors" in Rich Results Test
How to Fix It
If using WordPress:
- Install RankMath or Yoast SEO (both auto-generate schema)
- For RankMath: Settings > Schema > Enable auto-schema for Post Type
- For Yoast: schema output is enabled by default; adjust the page and article type per post in the Yoast sidebar's Schema tab
- Test a post with Rich Results Test to confirm
If custom site:
Add JSON-LD schema to the `<head>` of your pages.
For a blog post, an illustrative example (swap in your real headline, image, dates, and author):
```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Technical SEO Audit: The Complete Checklist",
  "image": "https://yoursite.com/images/cover.jpg",
  "datePublished": "2026-01-15",
  "dateModified": "2026-02-01",
  "author": { "@type": "Person", "name": "Jane Citizen" }
}
</script>
```
For your organisation, in the homepage `<head>` (again, placeholder values):
```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Your Business",
  "url": "https://yoursite.com",
  "logo": "https://yoursite.com/logo.png"
}
</script>
```
7. HTTPS and Site Security
HTTPS (HTTP secured with SSL/TLS) is an explicit ranking factor and a trust signal. Google has prioritised it since 2014.
What to Check
HTTPS enabled on all pages
- Visit your homepage: does the URL start with `https://`?
- Check the address bar: is there a padlock icon (and no "Not secure" warning)?
- Visit a few pages and confirm
No mixed content (HTTP resources on HTTPS page)
- Visit your HTTPS homepage
- Open Chrome DevTools (F12)
- Go to Console tab
- If you see errors about "insecure resources" or "mixed content", you have a problem
SSL certificate validity
- Click the lock icon in the address bar
- Click "Certificate is valid" (or "This site is not secure" if it's not)
- Check the expiry date: it should be in the future, with auto-renewal configured (Let's Encrypt certificates, for example, are reissued every 90 days)
How to Check It
Tools:
- Chrome DevTools (F12 > Console, look for mixed content warnings)
- Why No Padlock? (whynopadlock.com, identifies mixed content issues)
- SSL Labs (ssllabs.com, validates certificate strength)
Steps:
- Visit your site—does it have HTTPS?
- Check the Chrome console for mixed content warnings
- Click the lock icon to view certificate details
- Run through Why No Padlock? for a full report
What Passing Looks Like
- All pages are HTTPS
- Padlock shown in the address bar (no "Not secure" warning)
- No mixed content errors in console
- Certificate is valid and not expiring soon
What Failing Looks Like
- Site is HTTP (not secure)
- Mixed content warnings in DevTools console
- Certificate is expired or self-signed
- Browser shows "This site is not secure"
How to Fix It
Migrate from HTTP to HTTPS:
- Get an SSL certificate:
- Most hosting providers include free Let's Encrypt certificates
- No need to pay for expensive certificates in 2026
- Enable HTTPS in your hosting control panel (cPanel, Plesk, etc.)
- Set up automatic redirects from HTTP to HTTPS (.htaccess for Apache):
```apache
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```
- Update your WordPress home and site URL to HTTPS (Settings > General)
- Update XML sitemap to reference HTTPS URLs
- Update internal links to HTTPS
- Test with "Why No HTTPS?" tool
Fix mixed content (resources loaded via HTTP):
- Identify which resources are insecure (check console in DevTools)
- Update image URLs, scripts, stylesheets to HTTPS
- For external resources you don't control, reference them via `https://` directly; protocol-relative URLs (`//domain.com/resource` instead of `http://domain.com/resource`) are an older workaround that still works
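For instance, a hard-coded HTTP asset in a template (placeholder URL):
```html
<!-- Before: insecure request triggers a mixed content warning on an HTTPS page -->
<img src="http://yoursite.com/images/logo.png" alt="Logo">

<!-- After: the same asset loads securely -->
<img src="https://yoursite.com/images/logo.png" alt="Logo">
```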
8. JavaScript and Rendering Issues
Modern sites use JavaScript heavily. Google can render JavaScript, but it's slower and sometimes buggy.
What to Check
JavaScript is not required to load critical content
- Open your homepage in an incognito window
- Open DevTools (F12)
- Go to Network tab
- Disable JavaScript: Ctrl+Shift+P, type "Disable JavaScript", press Enter
- Does the page still show your main content (headline, body text)? If yes, good.
Third-party scripts aren't blocking page load
- In DevTools Network tab, reload the page
- Look for slow-loading scripts (ads, analytics, chat widgets)
- Anything >3 seconds is blocking performance
How to Check It
Tools:
- Chrome DevTools (Network, Lighthouse)
- Lighthouse report (in Chrome, right-click > Inspect > Lighthouse tab)
Steps:
- Open Lighthouse in DevTools
- Run an audit for Performance
- Look for "Opportunities" section
- "Reduce JavaScript execution time" and "Unused JavaScript" are common issues
What Passing Looks Like
- Critical content (headline, body text) loads with JavaScript disabled
- No render-blocking JavaScript in Network tab
- Lighthouse Performance score >80
What Failing Looks Like
- Page is blank with JavaScript disabled
- Large third-party scripts taking 5+ seconds
- Lighthouse Performance score <50
How to Fix It
Defer non-critical JavaScript:
- In your HTML, change:
```html
<script src="/js/app.js"></script>
```
To:
```html
<script src="/js/app.js" defer></script>
```
The `defer` attribute tells the browser to download the script in parallel but run it only after the HTML has been parsed (the `/js/app.js` path is a placeholder).
- For scripts that can run independently of page parsing (e.g., analytics), `async` is an alternative; it executes as soon as the download finishes
Lazy-load third-party scripts:
For ads and chat widgets, load them only when needed, for example a few seconds after the page has finished loading. A sketch; the widget URL is a placeholder:
```html
<script>
  // Inject the chat widget after page load so it never blocks rendering
  window.addEventListener('load', () => {
    setTimeout(() => {
      const s = document.createElement('script');
      s.src = 'https://widget.example.com/chat.js'; // placeholder third-party script
      s.async = true;
      document.body.appendChild(s);
    }, 3000);
  });
</script>
```
9. Common Technical SEO Issues and Fixes
We've covered the main categories. Here are some other common issues worth monitoring:
Redirect loops
- Page A redirects to B, B redirects back to A
- Fix: trace the redirect chain and break the loop
4xx and 5xx errors
- The GSC Indexing > Pages report shows URLs returning 404 (not found) or 500 (server error)
- Fix: either redirect them properly or restore them
Pagination issues
- Old rel=prev/next is deprecated (Google stopped using it in 2019)
- Modern approach: make sure each paginated page is crawlable and self-canonicalised, with plain `<a href>` links between sequential pages
Hreflang tags
- If you have content in multiple languages or regions, use hreflang to tell Google
- For an Australian site with content for AU, NZ, and UK audiences, add alternate link tags like the sketch below (URLs are placeholders):
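```html
<link rel="alternate" hreflang="en-au" href="https://yoursite.com/au/page/" />
<link rel="alternate" hreflang="en-nz" href="https://yoursite.com/nz/page/" />
<link rel="alternate" hreflang="en-gb" href="https://yoursite.com/uk/page/" />
<link rel="alternate" hreflang="x-default" href="https://yoursite.com/page/" />
```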
Crawl budget
- Large sites (50,000+ pages) need to be mindful of crawl budget
- Remove low-value pages from indexing (thin pages, outdated content)
- Fix crawlability issues so Google spends crawl budget on important content
Creating Your Audit Checklist
Use this checklist to conduct your own audit:
- [ ] GSC indexed page count matches expected
- [ ] Sitemaps submitted and valid
- [ ] No widespread unintentional noindex tags
- [ ] Every page has self-referential canonical
- [ ] No redirect chains (≤2 hops)
- [ ] HTTPS is enabled site-wide
- [ ] Mobile usability: no errors flagged in Lighthouse
- [ ] Core Web Vitals: >75% of traffic meets targets
- [ ] robots.txt exists and is minimal
- [ ] CSS, JS, images are not blocked in robots.txt
- [ ] No orphaned pages (every page has ≥1 internal link)
- [ ] Site hierarchy is logical (≤3 levels deep)
- [ ] Internal anchor text is descriptive
- [ ] Organization and Article schema markup is present and valid
- [ ] No mixed content (HTTP on HTTPS)
- [ ] No render-blocking JavaScript
What to Do Next
If you've identified issues, prioritise them:
- Fix indexation issues first (noindex, robots.txt blocks)
- Then canonicalisation and redirects
- Then Core Web Vitals and mobile
- Then crawlability
- Then internal linking and structure
- Then schema markup
For a comprehensive technical SEO audit with a detailed remediation plan tailored to your site, Anitech performs full audits and guides you through the fixes. We'll identify issues, prioritise them, and provide step-by-step remediation instructions.