I ran into this while building the InfraTally DNS Propagation Checker. During testing I queried apple.com for TXT records and noticed something strange: Google (8.8.8.8), Cloudflare (1.1.1.1), Quad9, and Verisign all came back empty. OpenDNS returned eight records. Same domain, same record type, contradictory answers depending on which resolver you asked.
Each resolver was queried independently over raw UDP sockets, and the packets were genuinely reaching every resolver's IP address. So why the split result?
The answer is one of those things that is obvious in hindsight and completely invisible until you know to look for it: the 512-byte UDP limit and the truncation flag that most tools ignore.
DNS over UDP has a default maximum response size of 512 bytes. This was established in RFC 1035 back in 1987, when DNS was being designed for networks with much smaller maximum transmission units. The limit has never been formally raised in the base spec.
For most DNS queries this is fine. An A record response for a typical domain fits comfortably in 512 bytes. MX records are slightly larger but still usually fit. TXT records are the problem.
Enterprise domains accumulate TXT records over time the way old servers accumulate cruft. Apple.com has eight of them: SPF records, multiple domain verification tokens for Apple services, Cerner client IDs, Miro verification, Apple domain verification. Each one is a string of 50 to 150 characters. Add them up and you are looking at 800 to 900 bytes of answer data, well over the limit.
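To see how those numbers add up on the wire, here is a rough back-of-the-envelope sketch. The record strings below are illustrative placeholders with realistic lengths, not apple.com's actual records, and the per-record overhead assumes a compressed owner name (2 bytes) as resolvers typically return.

```python
# Rough wire-size arithmetic for a TXT answer section. These strings are
# illustrative placeholders, not apple.com's actual records.
txt_records = [
    "v=spf1 include:_spf.example.com ~all",
    "example-site-verification=" + "x" * 43,
] + ["verification-token-%02d=%s" % (i, "y" * 80) for i in range(6)]

def txt_rr_wire_size(s):
    # Per-record overhead: compressed name (2) + TYPE (2) + CLASS (2)
    # + TTL (4) + RDLENGTH (2) + one length byte per <=255-char string
    return 2 + 2 + 2 + 4 + 2 + 1 + len(s)

answer_bytes = sum(txt_rr_wire_size(s) for s in txt_records)
print(answer_bytes)  # well past the 512-byte UDP ceiling
```

Eight records of ordinary verification-token length blow through 512 bytes before the header and question section are even counted.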
Every DNS response starts with a 12-byte header. Most DNS code extracts the transaction ID and answer count and ignores everything in between. The TC (truncation) bit lives in the 16-bit flags field at bytes 2-3 of the header, at bit 9 counting from the least significant bit, and it is the one bit that tells you whether to trust the rest of the response.
Extracting the TC bit in Python is two lines:
import struct

flags = struct.unpack('>H', response[2:4])[0]
tc_bit = (flags >> 9) & 1  # 1 = truncated, 0 = complete response

if tc_bit:
    # Do not trust this response; retry over TCP
    pass
That is the check most DNS tools skip. If tc_bit is 1, the response is incomplete
and should not be presented to the user as a result.
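To make the check concrete without a live socket, here is a hedged sketch using two hand-built sample headers: one with the TC bit set, one without. The transaction ID and flag values are invented for illustration.

```python
import struct

# Decide whether a raw DNS response (as returned by sock.recvfrom) can be
# trusted. The sample headers below are hand-built for illustration.
def is_truncated(response: bytes) -> bool:
    flags = struct.unpack('>H', response[2:4])[0]
    return bool((flags >> 9) & 1)

# QR=1, RD=1, RA=1, TC=1 -> flags 0x8380; same with TC=0 -> 0x8180
truncated_header = struct.pack('>HHHHHH', 0x1234, 0x8380, 1, 0, 0, 0)
complete_header = struct.pack('>HHHHHH', 0x1234, 0x8180, 1, 8, 0, 0)

print(is_truncated(truncated_header))  # True
print(is_truncated(complete_header))   # False
```

Note that the truncated sample reports zero answers: exactly the shape that a naive tool misreads as "no records found".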
Before jumping straight to TCP, there is a better first option. EDNS0 (Extension Mechanisms for DNS, RFC 2671, later updated by RFC 6891) was designed specifically to solve the 512-byte problem. It lets a DNS client signal to the resolver that it can handle larger UDP responses, up to 4096 bytes in practice.
The way you signal EDNS0 support is by appending an OPT pseudo-record to the additional section of your query. This record tells the resolver: "I can receive UDP payloads up to N bytes. Send me the full response." Most major resolvers, including Google and Cloudflare, will honor this and return the complete answer in a single UDP packet without truncation.
The OPT record is 11 bytes appended to the query packet:
edns0_opt = (
    b'\x00'              # Name: root (.)
    b'\x00\x29'          # Type: OPT (41)
    b'\x10\x00'          # Class field = UDP payload size (0x1000 = 4096 bytes)
    b'\x00\x00\x00\x00'  # TTL field = extended RCODE + flags (all zero)
    b'\x00\x00'          # RDLENGTH: 0 (no options data)
)
You also need to set ARCOUNT to 1 in the query header (there is now one additional record) and increase your receive buffer from 512 to 4096 bytes so you can actually accept the larger response.
# Build query header with ARCOUNT=1 for EDNS0
arcount = 1
header = struct.pack(
    '>HHHHHH',
    tid,      # transaction ID
    0x0100,   # flags: standard query, recursion desired
    1,        # QDCOUNT: 1 question
    0,        # ANCOUNT: 0 answers
    0,        # NSCOUNT: 0 authority records
    arcount,  # ARCOUNT: 1 additional record (the OPT record)
)

# Receive buffer: 4096 bytes instead of 512
data, _ = sock.recvfrom(4096)
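The TCP example later in this article calls a build_dns_query helper. Here is one possible sketch of how the header, QNAME encoding, question section, and OPT record might combine into it; this is an illustrative implementation, not InfraTally's actual code.

```python
import random
import struct

QTYPES = {'A': 1, 'MX': 15, 'TXT': 16}

def build_dns_query(domain, qtype, use_edns0=True):
    # Header: random transaction ID, recursion desired, ARCOUNT=1 for EDNS0
    tid = random.getrandbits(16)
    header = struct.pack('>HHHHHH', tid, 0x0100, 1, 0, 0,
                         1 if use_edns0 else 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b''.join(
        bytes([len(label)]) + label.encode('ascii')
        for label in domain.rstrip('.').split('.')
    ) + b'\x00'
    question = qname + struct.pack('>HH', QTYPES[qtype], 1)  # QCLASS: IN
    packet = header + question
    if use_edns0:
        packet += (
            b'\x00'              # Name: root (.)
            b'\x00\x29'          # Type: OPT (41)
            b'\x10\x00'          # UDP payload size: 4096 bytes
            b'\x00\x00\x00\x00'  # extended RCODE + flags (all zero)
            b'\x00\x00'          # RDLENGTH: 0
        )
    return tid, packet
```

For apple.com, the encoded name comes out as `\x05apple\x03com\x00`, and the EDNS0 variant is exactly 11 bytes longer than the plain query.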
With EDNS0 in place, Google and Cloudflare return the full apple.com TXT record set in a single UDP packet. The truncation problem disappears for the vast majority of real-world cases. Apple.com's 800-900 bytes of TXT records fit comfortably within the 4096-byte buffer.
EDNS0 solves most cases but not all. Some resolvers do not honor the extended buffer size. Some domains have TXT record sets that exceed even 4096 bytes (rare, but possible for large enterprises with aggressive domain verification requirements). And some network paths have issues with large UDP packets that do not affect TCP.
The correct fallback when you still get a truncated response after EDNS0 is to retry over TCP.
DNS over TCP works identically to UDP except for two things: it uses SOCK_STREAM
instead of SOCK_DGRAM, and each message is prefixed with a 2-byte big-endian
length field. TCP has no inherent response size limit.
def query_resolver_tcp(resolver_ip, domain, qtype, timeout=10):
    tid, packet = build_dns_query(domain, qtype, use_edns0=False)
    # TCP DNS: prefix message with 2-byte length
    tcp_packet = struct.pack('>H', len(packet)) + packet
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((resolver_ip, 53))
        sock.sendall(tcp_packet)
        # Read 2-byte length prefix first
        length_data = b''
        while len(length_data) < 2:
            chunk = sock.recv(2 - len(length_data))
            if not chunk:
                raise ConnectionError("Connection closed")
            length_data += chunk
        msg_length = struct.unpack('>H', length_data)[0]
        # Read the full response
        response = b''
        while len(response) < msg_length:
            chunk = sock.recv(msg_length - len(response))
            if not chunk:
                raise ConnectionError("Connection closed early")
            response += chunk
        return parse_dns_response(response, qtype)
    finally:
        sock.close()
Putting it all together, the correct approach for a robust DNS checker is a three-step process: send the query over UDP with an EDNS0 OPT record and a 4096-byte receive buffer; check the TC bit on every response before trusting it; and if the response is still truncated, retry the same query over TCP.
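The three-step flow can be sketched with the transport functions injected, so the control logic stands on its own. The udp_query and tcp_query callables stand in for the raw-socket helpers described above; udp_query is assumed to return the parsed records plus the TC bit.

```python
# Three-step resolution flow. udp_query returns (records, truncated);
# tcp_query returns records. Both are stand-ins for the socket code above.
def resolve_with_fallback(domain, qtype, udp_query, tcp_query):
    # Step 1: UDP with EDNS0 and a 4096-byte receive buffer
    records, truncated = udp_query(domain, qtype)
    # Step 2: trust the UDP answer only if the TC bit is clear
    if not truncated:
        return records
    # Step 3: still truncated even with EDNS0, so retry over TCP
    return tcp_query(domain, qtype)

# Stub transports to show the two paths
udp_ok = lambda d, q: (['record-a'], False)
udp_trunc = lambda d, q: ([], True)
tcp_full = lambda d, q: ['record-a', 'record-b']

print(resolve_with_fallback('example.com', 'TXT', udp_ok, tcp_full))
print(resolve_with_fallback('example.com', 'TXT', udp_trunc, tcp_full))
```

The second call shows the failure mode this article is about: a truncated UDP answer with zero records that would read as "no TXT records" if step 2 were skipped.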
The UDP path handles 95%+ of queries with low latency. TCP fallback catches the rest. Together they handle every TXT record set I have thrown at them in production, including apple.com, google.com, and several large enterprise domains with eight or more TXT records.
While debugging this I discovered a second problem unrelated to EDNS0. Two of the resolvers I initially included in the checker turned out to be unreliable.
OpenDNS timed out consistently during testing. OpenDNS is a content-filtering resolver and appears to rate limit or block certain traffic patterns, making it an unreliable choice for a diagnostic tool that needs consistent results.
Comodo Secure DNS was more problematic: it was returning incomplete TXT record sets for some domains, filtering records based on its content policy without indicating it was doing so. For a propagation checker this is worse than a timeout. A timeout tells you something went wrong. A silent partial result looks like real data and misleads anyone using the tool to verify their DNS configuration.
Both were dropped. The final resolver list in InfraTally's DNS checker is Google (8.8.8.8), Cloudflare (1.1.1.1), Level3 (4.2.2.1), Quad9 (9.9.9.9), Verisign (64.6.64.6), and Hurricane Electric (74.82.42.42). All six return consistent, unfiltered results for TXT records.
The DNS checker described in this article is live at InfraTally. No signup. No account. Enter any domain and get results from all six resolvers.
If you are building a DNS tool or debugging one that returns empty results on enterprise domains, check these things in order.
First, check the TC bit before reporting zero records. An empty result with TC=1 is a truncated response, not a missing record. Reporting it as "no records found" is wrong.
Second, add EDNS0 to your outgoing queries. The OPT record is 11 bytes and tells resolvers you can handle up to 4096 bytes over UDP. Most truncation problems disappear with this one change.
Third, implement TCP fallback for the cases EDNS0 does not cover. The code is more involved than UDP but not complex, and it makes your tool correct for 100% of cases rather than 95%.
Finally, be selective about which resolvers you include. Content-filtering resolvers like Comodo that silently modify results are worse than no resolver at all in a diagnostic tool. Partial results that look like real data cause more confusion than an honest error.