Microsoft’s Visual Studio Code 1.116 is less a routine point release than a signal flare about where the editor is headed next. The headline change is simple but important: GitHub Copilot Chat is now built in by default, so new users no longer have to discover, install, and configure an extension before they can start using AI assistance. But the more revealing story is what sits beside that change — a new Agent Debug Log panel, deeper terminal integration, and a series of workflow tweaks that make VS Code feel increasingly like an observability layer for AI coding agents, not just a text editor with a chatbot bolted on.
Overview
The shift to a built-in Copilot experience did not happen in isolation. Over the past year, Microsoft has been steadily collapsing separate AI surfaces into a more unified editor experience, including work to merge Copilot capabilities into a single extension and to open-source pieces of the stack. In late 2025, the VS Code team said it was working toward “one extension, same user experience,” and explained that the old split between the Copilot and Copilot Chat extensions was temporary while the platform migrated toward a single AI surface. That context matters because 1.116 is not a random feature dump; it is the public culmination of a deliberate product strategy.

The release also lands at a moment when AI-assisted development is moving from novelty to infrastructure. Microsoft’s own release notes frame the change as part of making VS Code “the open source AI code editor,” which is a subtle but powerful positioning statement. It suggests Microsoft does not want Copilot to feel like an add-on service anymore; it wants AI to be perceived as part of the editor’s baseline value proposition.
That move has practical implications for onboarding, enterprise standardization, and developer retention. When AI is built in, there is one fewer decision point for a new user, one fewer admin task for IT, and one fewer excuse for a competitor to win a developer’s first impression. In a market where AI coding tools are increasingly differentiated by friction, those savings matter almost as much as raw model quality.
It also changes the competitive temperature. Cursor, which has been pushing an agent-first interface and an always-on command console, has made code editors feel like a battleground over workflow primacy rather than syntax highlighting. Microsoft’s answer in 1.116 is not to abandon the editor model, but to make the editor smarter, more instrumented, and more agent-aware. That is a classic Microsoft move: absorb the disruptive idea, then turn it into a platform feature.
Background
The Copilot story inside VS Code has been evolving for years, but 2025 was the inflection point. Microsoft began consolidating the AI stack, first by open-sourcing the Copilot Chat extension and then by porting more functionality into a unified experience. By November 2025, the team had already described inline suggestions and chat as part of the same trajectory, with the Copilot extension slated for deprecation and the chat extension becoming the central surface. Version 1.116 is what that trajectory looks like when it finally stops sounding theoretical and starts shipping as the default.

The architecture change is more than branding. The old model asked users to understand that AI code completion and AI chat were separate products, even if both were sold under the Copilot umbrella. The new model reduces that cognitive overhead and makes Copilot feel like one coherent assistant inside VS Code. That is a meaningful simplification for new users, especially those who do not want to hunt through the marketplace just to get started.
At the same time, the AI editor market has matured in ways that force vendors to choose their emphasis. Some tools are leaning into agent orchestration, foregrounding sessions, tasks, and command automation as the primary interface. Others, including Microsoft here, are doubling down on the familiar editor while threading agent capabilities through it. The difference sounds subtle, but it is strategic: one path asks developers to adapt to a new console, while the other asks them to keep coding as usual and let the agent adapt to their workflow.
That tension explains why observability features suddenly matter so much. Once agents can write code, run commands, and interact with terminals, developers need visibility into what the system did and why it did it. The new Agent Debug Log panel, the built-in terminal notifications, and the improved session tooling all point to the same conclusion: Microsoft understands that AI features become enterprise-grade only when they are inspectable, reproducible, and governable.
Built-in Copilot as a Default Experience
The biggest user-facing change in 1.116 is also the easiest to understand: Copilot Chat is now built into VS Code for new users. That means a fresh installation includes chat, inline suggestions, and agent workflows without an extension hunt or marketplace dependency. Existing users are largely unaffected, which is a smart transition strategy because it avoids breaking established setups while still changing the default path for everyone else.

This is an onboarding decision disguised as a feature announcement. Microsoft is lowering the activation energy required to try AI coding assistance, and that is critical when the market is crowded with tools promising to make developers faster. The first five minutes matter, and the company clearly wants those minutes to include an immediate Copilot experience rather than a blank editor and a marketplace search box.
Why the Default Matters
A built-in feature is not just more convenient; it is a statement of product priority. By making Copilot part of the standard install, Microsoft is saying that AI is no longer optional decoration but a core layer of the IDE. That helps with consumer adoption, but it also simplifies enterprise rollouts because administrators can standardize on a known baseline instead of managing an extension dependency graph.

There is also a subtle competitive advantage here. Competing editors often rely on users to assemble their own AI stack, or they require a separate decision to commit to an AI-native workflow. Microsoft is using its distribution advantage to make the “right” choice feel like the easiest choice. That is quietly ruthless and very effective.
Key implications include:
- Faster time-to-value for new users.
- Less setup friction for teams rolling out AI development tools.
- Stronger default adoption of Copilot features.
- More cohesive UX across chat, inline suggestions, and agents.
- Better platform control over future AI feature delivery.
Agent Debugging and Session Visibility
The new Agent Debug Log panel may be the most important feature in the release for serious teams, even if it is less flashy than the built-in Copilot announcement. It gives developers a way to inspect previous agent sessions, review what the system did, and diagnose why a particular interaction went off the rails. The logs are persisted locally, which is valuable both for debugging and for privacy-conscious workflows that do not want every trace sent off-machine by default.

That changes the AI developer experience from speculative to auditable. Instead of treating an agent session as a black box, developers can now reconstruct the chain of events that led to a command, a prompt, or a tool call. In practice, that helps with prompt tuning, custom agent configuration, and root-cause analysis when the model misunderstands the workspace or makes an unwanted decision.
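To make the idea of an auditable session concrete, here is a minimal sketch of walking a locally persisted session log to answer a root-cause question. The JSONL record shape and field names below are illustrative assumptions, not VS Code's actual Agent Debug Log schema.

```typescript
// Hypothetical session log record -- the real format may differ.
interface SessionEvent {
  ts: string;                                          // ISO timestamp
  kind: "prompt" | "tool_call" | "terminal" | "response";
  detail: string;                                      // tool name, command, or text
}

// Parse one JSON record per line, skipping blank lines.
function parseSessionLog(jsonl: string): SessionEvent[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as SessionEvent);
}

// Reconstruct the tool calls that preceded a terminal command -- the kind
// of "how did the agent get here?" question the debug log exists to answer.
function toolCallsBefore(events: SessionEvent[], command: string): string[] {
  const idx = events.findIndex(
    (e) => e.kind === "terminal" && e.detail === command
  );
  if (idx < 0) return [];
  return events
    .slice(0, idx)
    .filter((e) => e.kind === "tool_call")
    .map((e) => e.detail);
}

const log = [
  '{"ts":"2026-04-17T10:00:00Z","kind":"prompt","detail":"install deps"}',
  '{"ts":"2026-04-17T10:00:01Z","kind":"tool_call","detail":"read_file:package.json"}',
  '{"ts":"2026-04-17T10:00:02Z","kind":"terminal","detail":"npm install"}',
].join("\n");

console.log(toolCallsBefore(parseSessionLog(log), "npm install"));
// -> [ 'read_file:package.json' ]
```

Even this toy version shows why local persistence matters: the trace can be replayed and queried offline, without sending anything off-machine.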
From Black Box to Audit Trail
The broader significance is that Microsoft is acknowledging a hard truth about agentic workflows: trust is not created by cleverness alone. Teams need evidence, not just output. A debug log turns agent behavior into something engineers can reason about, which is essential if AI is going to be allowed anywhere near production code, security-sensitive tasks, or regulated environments.

That matters especially for enterprises, where the question is rarely “Can the model code?” and more often “Can we explain what it did?” An auditable session history helps procurement, compliance, and internal platform teams validate the tool before broad rollout. In that sense, the debug log is not just a developer convenience; it is part of the control plane.
The feature also pairs well with the release’s other visibility improvements. By exposing diffs directly in chat and streamlining session state, VS Code is making agent activity feel legible inside the normal workflow rather than hidden in side channels. That is a stronger design pattern than asking developers to switch contexts repeatedly.
Terminal Overhaul and Agent Input Handling
The terminal changes in 1.116 are deceptively technical, but they are central to the product’s AI ambitions. Microsoft removed an LLM-based prompt-for-input detection step that had previously introduced extra inference calls every time terminal output arrived. That is a classic hidden-cost problem: a feature that looks trivial in UX terms can become expensive and slow once it is scaled across every command and output chunk.

By eliminating that extra detection pass, VS Code is effectively reducing latency and token usage while making terminal interactions more deterministic. The agent now handles terminal input directly, which should make the system feel more responsive and less chatty under the hood. For users, the difference will show up as less waiting and fewer unnecessary model calls.
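The tradeoff being removed is easy to illustrate: deciding "is this terminal output waiting for input?" can be a cheap deterministic check rather than a model call on every output chunk. The patterns below are assumptions for illustration only, not VS Code's actual detection logic.

```typescript
// Illustrative heuristics for "the terminal is waiting on the user" --
// these patterns are assumptions, not VS Code's implementation.
const PROMPT_PATTERNS: RegExp[] = [
  /[$#%>]\s*$/,            // common shell prompt endings
  /password[^\n]*:\s*$/i,  // credential prompts
  /\[y\/n\]\s*$/i,         // yes/no confirmations
];

function looksLikePrompt(outputChunk: string): boolean {
  // Only the final non-empty line can be an active prompt.
  const lines = outputChunk.split("\n").filter((l) => l.trim().length > 0);
  const last = lines[lines.length - 1] ?? "";
  return PROMPT_PATTERNS.some((re) => re.test(last));
}

console.log(looksLikePrompt("user@host:~$ "));       // true
console.log(looksLikePrompt("Proceed? [y/N] "));     // true
console.log(looksLikePrompt("Compiling 12 files…")); // false
```

A regex pass like this costs microseconds per chunk; an inference call costs hundreds of milliseconds and tokens. Scaled across every line of terminal output in an agent session, that difference is exactly the latency and cost the release claims to recover.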
Foreground Terminals Become First-Class
One of the most important updates is that agent tools can now interact with foreground terminals, not just agent-created background terminals. That means the agent can read and write to a running REPL or interactive script already visible in the terminal panel. In practical terms, this expands the agent’s reach from isolated automation into the messy, stateful reality of real developer workflows.

This is a real productivity gain because many developer tasks are not clean, one-shot commands. They involve installers, shells, REPLs, authentication prompts, and interactive debugging sessions. Support for foreground terminals is the difference between “helpful in demos” and “useful on actual projects.”
The release also enables background terminal notifications by default, so the agent can receive alerts when commands finish, time out, or need input. That reduces polling and makes long-running tasks easier to coordinate. In a world of autonomous workflows, fewer blind spots mean fewer stalled sessions.

Key terminal improvements:
- Lower inference overhead from removed detection calls.
- Better handling of interactive scripts and REPLs.
- More reliable command orchestration in real projects.
- Faster notification loops for background jobs.
- Improved efficiency for long-running sessions.
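The notification change is, at heart, a move from polling to a push model. The sketch below shows the shape of that design in miniature; the class and method names are illustrative, not VS Code API.

```typescript
// Minimal push model: the terminal signals completion once, instead of
// the agent re-checking state on a timer. Names are illustrative.
type ExitListener = (exitCode: number) => void;

class BackgroundTerminal {
  private listeners: ExitListener[] = [];

  // The agent subscribes once and is woken exactly when the command ends.
  onDidExit(listener: ExitListener): void {
    this.listeners.push(listener);
  }

  // Called by the terminal host when the command finishes or times out.
  finish(exitCode: number): void {
    for (const l of this.listeners) l(exitCode);
    this.listeners = [];
  }
}

const term = new BackgroundTerminal();
const seen: number[] = [];
term.onDidExit((code) => seen.push(code));

term.finish(0);    // long-running command completes
console.log(seen); // -> [ 0 ]
```

With a push model, a session blocked on a ten-minute build consumes nothing while it waits; with polling, every check is wasted work and a chance to miss the moment the command needs input.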
Chat UX, Diffs, and Performance
Microsoft is also tightening the day-to-day chat experience in ways that make the agent feel less like a sidecar and more like a primary workspace. Code diffs now render directly inside the chat conversation, which reduces context switching and makes it easier to review proposed changes in place. That is a small-seeming interface move with outsized workflow value, because every avoided click reduces friction during rapid iteration.

The release also improves chat rendering performance, including reduced layout thrashing and more efficient incremental updates while responses stream. These are the kinds of fixes users notice less consciously than flashy features, but they often matter more in sustained use. A responsive chat surface feels trustworthy; a laggy one feels fragile.
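For readers curious what "diffs inside the conversation" involves mechanically, here is a generic line diff of the kind a chat surface can render inline, built on the classic longest-common-subsequence table. This is a textbook sketch, not the diff engine VS Code uses.

```typescript
// Generic LCS-based line diff emitting unified-style "+/-" lines.
function lineDiff(before: string[], after: string[]): string[] {
  const m = before.length, n = after.length;
  // lcs[i][j] = LCS length of before[i..] and after[j..]
  const lcs: number[][] = Array.from({ length: m + 1 }, () =>
    new Array<number>(n + 1).fill(0)
  );
  for (let i = m - 1; i >= 0; i--) {
    for (let j = n - 1; j >= 0; j--) {
      lcs[i][j] = before[i] === after[j]
        ? lcs[i + 1][j + 1] + 1
        : Math.max(lcs[i + 1][j], lcs[i][j + 1]);
    }
  }
  // Walk the table, preferring deletions when ties occur.
  const out: string[] = [];
  let i = 0, j = 0;
  while (i < m && j < n) {
    if (before[i] === after[j]) { out.push("  " + before[i]); i++; j++; }
    else if (lcs[i + 1][j] >= lcs[i][j + 1]) { out.push("- " + before[i]); i++; }
    else { out.push("+ " + after[j]); j++; }
  }
  while (i < m) out.push("- " + before[i++]);
  while (j < n) out.push("+ " + after[j++]);
  return out;
}

console.log(
  lineDiff(["let x = 1;", "print(x)"], ["let x = 2;", "print(x)"]).join("\n")
);
// -> - let x = 1;
//    + let x = 2;
//      print(x)
```

Rendering this directly in the chat transcript is what lets a developer accept or reject a change without ever leaving the conversation.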
Smaller UX Changes, Bigger Workflow Effects
The new tool confirmation carousel, especially in Insiders, is another example of Microsoft refining the agent control loop. When multiple tools are being invoked in sequence, the carousel offers compact navigation for reviewing and approving actions without burying the user in modal fatigue. That is a sensible response to a growing class of agent interactions that are more complex than a single prompt-response cycle.

There is also a new Chat Customizations welcome page that helps developers draft agent configurations from natural language descriptions. Combined with a built-in JS/TS Chat Features extension, the editor is making it easier to tune Copilot for specific projects without requiring deep configuration work up front. The direction of travel is clear: reduce setup friction, then add visibility and control once the workflow is in motion.
Useful takeaways:
- Inline diffs shorten review loops.
- Better streaming performance makes chat feel smoother.
- Tool confirmation UX reduces approval overhead.
- Natural-language customization lowers the barrier to agent setup.
- Project-specific JS/TS support improves practical utility.
Competitive Pressure from Cursor and Agent-First Editors
Microsoft’s move lands under real competitive pressure. Cursor has been making the case that the IDE itself should revolve around agent management, not traditional code editing, and its product direction has clearly influenced the market conversation. Even the public reporting around Cursor’s growth has underscored how quickly developers are adopting AI-native workflows and how aggressively the category is being monetized.

That matters because the comparison is not really “VS Code versus Cursor” in the old sense of editor versus editor. It is now editor-centric AI versus agent-centric AI. Cursor’s advantage is that it can make the agent feel like the main character. Microsoft’s advantage is that it already owns the place where developers spend most of their coding time.
Microsoft’s Counterstrategy
VS Code 1.116 is a very Microsoft-style counterstrike: do not concede the interface, just expand the editor until it can absorb the new paradigm. Built-in Copilot reduces friction. Agent logging reduces uncertainty. Terminal upgrades reduce the difference between local work and agent work. The result is a product that stays familiar while quietly becoming more capable.

That approach is especially strong in enterprise environments, where familiarity is a strategic asset. Companies that have standardized on VS Code may be reluctant to retrain developers on an entirely new agent console if the same AI outcomes can be reached inside the existing editor. Microsoft knows this, which is why the release emphasizes continuity as much as innovation.
Still, Microsoft should not assume the battle is settled. Agent-first tools can move faster on UX experimentation, and they can frame the entire product around orchestration rather than editing. If those tools continue to grow, VS Code will need to keep proving that staying in the editor is not a compromise.
Enterprise Implications and Governance
The enterprise significance of 1.116 is easy to underestimate if you focus only on the Copilot headline. Built-in AI lowers onboarding friction, but the more important enterprise question is whether the system is controllable. Microsoft’s answer is increasingly yes: logs are available, sessions are inspectable, terminal behavior is more predictable, and AI features can be disabled if needed.

That makes VS Code easier to justify in environments with compliance requirements, security reviews, and internal governance controls. Teams can assess how the agent behaves, what it touched, and how it got to a given state. That is exactly the kind of evidence procurement teams want before allowing broad use of generative AI in engineering workflows.
Why Governance Is Now a Product Feature
The most interesting part of this trend is that governance is no longer a back-office concern. It is becoming a feature users can feel directly in the product. The more AI writes code and drives terminals, the more valuable it becomes to have explicit auditability and clear interaction surfaces. That is what makes the debug log and terminal controls strategically important, not just technically neat.

Enterprises will likely welcome the fact that Microsoft is packaging AI in a way that preserves policy levers and platform consistency. The built-in extension model also reduces dependency sprawl, which can be a major administrative headache in large organizations. This is one area where Microsoft’s scale and procurement muscle likely still matter a great deal.

Governance-related benefits include:
- Easier compliance review through local session logs.
- More predictable terminal automation for controlled environments.
- Lower deployment friction for standardized developer images.
- Clearer AI feature boundaries for policy teams.
- Better fit for regulated industries that need traceability.
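For administrators, the opt-out lever is a settings toggle rather than an uninstall. The snippet below shows the shape of such a configuration in a user or policy-managed settings.json; the setting name `chat.disableAIFeatures` has appeared in recent VS Code builds, but its exact scope in 1.116 is an assumption here and should be verified against the release notes.

```json
{
  "chat.disableAIFeatures": true
}
```

The point is less the specific key than the model: a single declarative switch that IT can enforce centrally, instead of auditing which AI extensions each developer happens to have installed.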
Product Strategy and the Open Source AI Editor Vision
Microsoft’s language around 1.116 is revealing. The company is not merely “adding AI features” but positioning VS Code as the open source AI code editor. That phrasing suggests a long-term strategy in which open-source credibility, AI functionality, and broad developer adoption reinforce one another rather than compete.

That strategy has been unfolding through deliberate milestones. Open-sourcing Copilot Chat was step one. Unifying the extension surface was step two. Built-in Copilot is step three, and it does something elegant: it turns what used to be a separate opt-in extension into part of the editor’s identity. That is a cleaner story for users and a stronger moat for Microsoft.
The Meaning of “Built-In”
“Built-in” is not just a packaging choice. It is an architectural and psychological signal. Architecturally, it reduces the number of moving parts in the setup path. Psychologically, it tells users that AI is expected, supported, and central rather than experimental or secondary.

That kind of framing can shape developer behavior in subtle ways. A feature that ships by default gets tested more, discussed more, and normalized faster. Over time, that can influence not only adoption but also what the market considers standard in a modern editor.
There is, however, a balancing act. If Microsoft pushes AI too aggressively, it risks alienating users who prefer a classic editor experience. The company is trying to mitigate that by keeping AI opt-out settings available, but the broader product direction is unmistakable. VS Code is becoming an AI-first platform whether every user wants to think of it that way or not.
Strengths and Opportunities
The release’s biggest strength is that it combines lower friction, better observability, and deeper workflow integration in one coherent package. That matters because successful AI tools need more than clever models; they need repeatable, inspectable, low-friction workflows that fit the way developers already work. VS Code 1.116 checks those boxes more convincingly than many competitors.

It also gives Microsoft a clearer enterprise story at exactly the right moment. The company can now point to built-in AI, local logging, and terminal control as evidence that VS Code is not just experimenting with agents but operationalizing them. That creates room for broader deployment in organizations that would never approve a black-box AI assistant.

Key strengths:
- Built-in onboarding makes AI easier to try.
- Local logs improve transparency and debugging.
- Foreground terminal support broadens real-world usefulness.
- Reduced latency improves the feel of agent sessions.
- Inline diffs speed up review and iteration.
- Enterprise alignment strengthens procurement confidence.
- Open-source positioning reinforces community trust.
Risks and Concerns
The biggest risk is that Microsoft may be solving the wrong problem for some users. If the market continues to tilt toward dedicated agent consoles, then making the editor better may not be enough to win the next wave of workflows. The company is betting that most developers still want to live in the code editor first and the agent console second, and that bet may not hold forever.

There is also the risk of feature complexity. As AI interactions expand into terminal control, session logging, customization pages, and multi-step approvals, the experience can become more powerful but also more cognitively demanding. If the controls become too fragmented, the simplicity that built-in Copilot was meant to deliver could erode. That would be a frustrating outcome for users who just want help writing code.

Key risks:
- Agent sprawl could make the UI harder to learn.
- Enterprise policy gaps may appear as usage expands.
- Terminal automation mistakes can create destructive side effects.
- Model latency and cost still matter even after optimization.
- User trust may suffer if agent actions feel opaque.
- Competitors may still out-innovate on agent-first UX.
- Opt-out pressure could rise among privacy-conscious users.
Looking Ahead
The next phase will likely be about refinement rather than reinvention. Microsoft has already shown that it wants to make agents more inspectable, more terminal-aware, and more central to the editor experience. The obvious follow-up is to keep tightening the loop between what the agent plans, what it executes, and what the user can review afterward.

The other watchpoint is how aggressively Microsoft continues to collapse the distinction between “editor features” and “AI features.” If the company keeps embedding Copilot deeper into the default VS Code experience, then the editor could become the reference implementation for mainstream AI-assisted development. If it does not, more specialized tools will keep claiming the innovation lead.
Developers should watch several concrete signals in the coming releases:
- Whether agent logging expands into richer trace and replay tooling.
- How far terminal automation is extended into more interactive workflows.
- Whether built-in Copilot changes adoption among new VS Code installs.
- How quickly Microsoft responds to agent-first competitors.
- Whether enterprise controls become more granular and policy-friendly.
- How much the UX simplifies as features accumulate.
Source: WinBuzzer https://winbuzzer.com/2026/04/17/microsoft-vs-code-1-116-copilot-built-in-agent-debug-xcxwbn/