
Deep research felt genuinely helpful for the first time, but it also felt a little… careless. A familiar tension arose as you watched the tool scour the internet, gathering information quickly: the relief of automation combined with the worry that it might come back with some dubious mementos.
A passage from a long-forgotten blog. A figure from a page that appears to date to 2009. Something plausible-sounding but only vaguely grounded. Fascination followed by a silent need to verify: that is the emotional backdrop for AI research in 2026.
| Category | Details |
|---|---|
| Product | ChatGPT “Deep research” (agentic, multi-step research tool) |
| Company | OpenAI |
| New capability | Users can manage “Sites → Manage sites” and restrict research to specific websites/domains, or prioritize selected sites while still searching the broader web |
| Connected sources | Deep research can use connected apps and uploaded files as trusted sources, alongside the web |
| Workflow upgrades | Proposed research plan before it begins; real-time progress tracking; ability to interrupt and refine mid-run (including adjusting sources) |
| Report experience | Full-screen document-style report view with navigation (table of contents + sources), plus downloadable formats |
| Official reference | OpenAI Help Center: “Deep research FAQ” |
This discomfort is the direct target of OpenAI’s most recent update to ChatGPT’s deep research. Users can now manage sources from the prompt window (“Sites → Manage sites”), effectively telling the tool which sites or domains to use. You can either prioritize the websites you enter while still allowing full-web search, or limit the research to just those sites. With one minor interface change, the philosophy shifts from “trust me, I searched” to “here’s the universe you’re allowed to search.”
On paper, this is merely a settings panel. In practice, it serves as a kind of governance, particularly for those whose work is bounded by a select few sources: regulators, standards bodies, first-party records, peer-reviewed publications. It’s difficult to ignore how often boundaries determine credibility.
Most people don’t read “the internet.” They read a handful of sources they’ve come to trust, and until proven otherwise, they disregard everything else. This update lets deep research lean into that very human habit.
According to OpenAI’s own documentation, the feature offers both flexibility and control: you can lock deep research to particular websites, or instruct it to treat your selected sources as primary while it continues to explore. The second option is the more intriguing one, because it acknowledges a messy truth.
You don’t want to miss the best source, which can occasionally sit outside the trusted list: a new report, a niche dataset, an obscure PDF. But you also don’t want your main claims resting on a tenuous foundation. The “prioritize but allow” setting reads as OpenAI’s acknowledgment that most research is compromise.
Connected sources add a layer that companies have been quietly requesting. Deep research is no longer limited to the open web. According to OpenAI, you control which sources it can use, including linked apps, uploaded files, and webpages.
This is significant because real work rarely lives in public articles alone. It sits in product documents, internal memos, customer tickets, or PDFs on a shared drive. When deep research can draw from those sources while still citing and organizing the results properly, it gets closer to how analysts actually work.
The update also changes the experience of reading the output, which may not seem like much until you’ve tried to parse a lengthy AI-generated report in a chat window.
The Verge describes a new full-screen report viewer with a table of contents and a source list, plus downloads in Word, PDF, and Markdown formats. It’s the difference between getting a dense email and opening a document that behaves like a document. Easier to scan. Easier to navigate. Less of the endless scrolling that makes you feel like you’re digging a trench.
Real-time control is the feature that feels most grown-up as a product. According to OpenAI, you can monitor progress while deep research is running and pause it to narrow the focus, including changing the sources it can use. It acknowledges that research is rarely linear: you start with one question, then discover halfway through that it’s a slightly different one.
In the previous model, you wasted time and patience waiting for the run to finish before rerunning it with better instructions. Now, because you can pause the tool mid-sentence, it acts more like a collaborator.
It’s still unclear whether source selection will “solve” people’s trust issue with AI research, since trust isn’t only about where information comes from. It also depends on how that information is interpreted, which claims are highlighted, and whether the tool quietly smooths over ambiguity in the interest of coherence.
Selecting sources does, however, change the work’s tone. Watching deep research operate within parameters you set, the tool feels less like a wandering intern and more like a junior analyst who knows which bookshelf to start with.
There is also a competitive subtext. Every AI company building research agents is learning the same lesson: autonomy is impressive, but professionals are drawn to controllability.
As soon as a tool is embedded in a workflow, particularly in regulated industries, finance, or policy, people want levers. They want auditability. They want to say “use these sources” and then see precisely what happened. That’s what OpenAI is betting on, and it’s probably a smart bet.
To regular users, the update might look like a specialized feature for analysts, academics, and overly serious spreadsheet users. But it touches a deeper cultural nerve: the anxiety of being confidently duped by a polished synopsis. Source selection doesn’t remove that fear, but it gives users a way to act on it. Quietly and practically, that may be the most significant “AI improvement” of all.


