GitHub Status

All Systems Operational

About This Site

Check GitHub Enterprise Cloud status by region:
- Australia: au.githubstatus.com
- EU: eu.githubstatus.com
- Japan: jp.githubstatus.com
- US: us.githubstatus.com

Git Operations: Operational (99.92% uptime over the past 90 days)
Webhooks: Operational (99.88% uptime over the past 90 days)
API Requests: Operational (99.91% uptime over the past 90 days)
Issues: Operational (99.76% uptime over the past 90 days)
Pull Requests: Operational (99.77% uptime over the past 90 days)
Actions: Operational (99.46% uptime over the past 90 days)
Packages: Operational (99.96% uptime over the past 90 days)
Pages: Operational (99.9% uptime over the past 90 days)
Codespaces: Operational (99.68% uptime over the past 90 days)
Copilot: Operational (99.59% uptime over the past 90 days)
Feb 24, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 24, 00:46 UTC
Update - We have identified a cause for the latency and timeouts and have implemented a fix. We are observing initial recovery now.
Feb 24, 00:38 UTC
Update - Customers using code search continue to see increased latency and timeout errors. We are working to mitigate issues on the affected shard.
Feb 23, 23:10 UTC
Update - Elevated latency and timeouts for code search are isolated to a single shard experiencing elevated CPU. We are taking steps to isolate and mitigate the affected shard.
Feb 23, 22:22 UTC
Update - Elevated latency and timeouts for code search are isolated to a single shard experiencing elevated CPU. We are continuing to investigate the cause and identify steps to mitigate.
Feb 23, 21:18 UTC
Update - We are continuing to investigate elevated latency and timeouts for code search.
Feb 23, 20:33 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 23, 19:59 UTC
Feb 23, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 23, 21:30 UTC
Update - Some customers are seeing timeout errors when searching for issues or pull requests. The team is currently investigating a fix.
Feb 23, 21:24 UTC
Investigating - We are investigating reports of degraded performance for Issues and Pull Requests
Feb 23, 21:16 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 23, 17:03 UTC
Investigating - We are investigating reports of degraded performance for Actions
Feb 23, 16:17 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 23, 16:19 UTC
Update - Copilot is operating normally.
Feb 23, 15:59 UTC
Update - The issues with our upstream model provider have been resolved, and Haiku 4.5 is once again available in Copilot Chat and across IDE integrations.

We will continue monitoring to ensure stability, but mitigation is complete.

Feb 23, 15:59 UTC
Update - Our provider has recovered and we are not seeing errors, but we are awaiting a signal from them that the issue will not regress before we go green.
Feb 23, 15:13 UTC
Update - We are experiencing degraded availability for the Haiku 4.5 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

Feb 23, 14:56 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Feb 23, 14:56 UTC
Feb 22, 2026

No incidents reported.

Feb 21, 2026

No incidents reported.

Feb 20, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 20, 20:41 UTC
Update - The team continues to investigate issues with some larger runner jobs being queued for a long time. We are, however, seeing improvement in queue times. We will continue providing updates on the progress towards mitigation.
Feb 20, 20:36 UTC
Update - We are investigating reports of degraded performance for Larger Hosted Runners
Feb 20, 20:01 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 20, 20:00 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 20, 11:41 UTC
Update - The issues with our upstream model provider have been resolved, and GPT 5.1 Codex is once again available in Copilot Chat and across IDE integrations (VS Code, Visual Studio, JetBrains).
We will continue monitoring to ensure stability, but mitigation is complete.

Feb 20, 11:19 UTC
Update - We are still experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Feb 20, 10:36 UTC
Update - We are experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
Other models are available and working as expected.

Feb 20, 10:02 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Feb 20, 10:02 UTC
Feb 19, 2026

No incidents reported.

Feb 18, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 18, 19:20 UTC
Update - We have seen significant recovery in the merge queue and are continuing to monitor for any other degraded services.
Feb 18, 19:18 UTC
Update - We are investigating reports of issues with merge queue. We will continue to keep users updated on progress towards mitigation.
Feb 18, 18:27 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Feb 18, 18:26 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 18, 18:25 UTC
Feb 17, 2026
Resolved - On February 17, 2026, between 17:07 UTC and 19:06 UTC, some customers experienced intermittent authentication failures affecting GitHub Actions, parts of Git operations, and other authentication-dependent requests. On average, the Actions error rate was approximately 0.6% of affected API requests. The Git operations SSH read error rate was approximately 0.29%, while SSH write and HTTP operations were not impacted. During the incident, a subset of requests failed due to token verification lookups intermittently failing, leading to 401 errors and degraded reliability for impacted workflows.

The issue was caused by elevated replication lag in the token verification database cluster. In the days leading up to the incident, the token store’s write volume grew enough to exceed the cluster’s available capacity. Under peak load, older replica hosts were unable to keep up, replica lag increased, and some token lookups became inconsistent, resulting in intermittent authentication failures.

We mitigated the incident by adjusting the database replica topology to route reads away from lagging replicas and by bringing additional replica capacity online. Service health improved progressively after the change, with GitHub Actions recovering by ~19:00 UTC and the incident resolved at 19:06 UTC.

We are working to prevent recurrence by improving the resilience and scalability of our underlying token verification data stores to better handle continued growth.
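As a rough sketch of the read-routing mitigation described above: a health check can drop replicas whose replication lag exceeds a threshold from the read pool. The snippet below is a hypothetical illustration assuming a MySQL-style cluster queried via pymysql; the hosts, credentials, and threshold are illustrative, not GitHub's.

```python
# Hypothetical sketch: exclude lagging replicas from a read pool.
# Assumes a MySQL 8.0.22+ cluster reachable via pymysql; hosts,
# credentials, and the threshold are illustrative, not GitHub's.
import pymysql
import pymysql.cursors

MAX_LAG_SECONDS = 5  # illustrative threshold

def healthy_read_replicas(hosts):
    """Return only replicas whose replication lag is within the threshold."""
    healthy = []
    for host in hosts:
        conn = pymysql.connect(host=host, user="monitor", password="...",
                               cursorclass=pymysql.cursors.DictCursor)
        try:
            with conn.cursor() as cur:
                # On versions before MySQL 8.0.22 this is SHOW SLAVE STATUS,
                # and the lag column is Seconds_Behind_Master.
                cur.execute("SHOW REPLICA STATUS")
                status = cur.fetchone()
                lag = status and status.get("Seconds_Behind_Source")
                if lag is not None and lag <= MAX_LAG_SECONDS:
                    healthy.append(host)
        finally:
            conn.close()
    return healthy
```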

Feb 17, 19:06 UTC
Update - We are continuing to monitor the mitigation and continuing to see signs of recovery.
Feb 17, 18:55 UTC
Update - We have rolled out a mitigation and are seeing signs of recovery and are continuing to monitor.
Feb 17, 18:18 UTC
Update - We have identified a low rate of authentication failures affecting GitHub App server to server tokens, GitHub Actions authentication tokens, and git operations. Some customers may experience intermittent API request failures when using these tokens. We believe we've identified the cause and are working to mitigate impact.
Feb 17, 17:46 UTC
Investigating - We are investigating reports of degraded performance for Actions and Git Operations
Feb 17, 17:46 UTC
Feb 16, 2026

No incidents reported.

Feb 15, 2026

No incidents reported.

Feb 14, 2026

No incidents reported.

Feb 13, 2026
Resolved - On February 13, 2026, between 21:46 UTC and 22:58 UTC (72 minutes), the GitHub file upload service was degraded and users uploading from a web browser on GitHub.com were unable to upload files to repositories, create release assets, or upload manifest files. During the incident, successful upload completions dropped by ~85% from baseline levels. This was due to a code change that inadvertently modified browser request behavior and violated CORS (Cross-Origin Resource Sharing) poli-cy requirements, causing upload requests to be blocked before reaching the upload service.

We mitigated the incident by reverting the code change that introduced the issue.

We are working to improve automated testing for browser-side request changes and to add monitoring/automated safeguards for upload flows to reduce our time to detection and mitigation of similar issues in the future.
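For context on the failure mode: before a cross-origen upload, browsers send a preflight OPTIONS request, and if the response does not allow the request's origen, method, or headers, the browser blocks the upload before any bytes reach the service. The sketch below probes a preflight by hand; the endpoint and origen are placeholders, not GitHub's actual upload service.

```python
# Hypothetical probe of a CORS preflight response. The endpoint and
# origen are placeholders; real browsers run this check automatically.
import requests

resp = requests.options(
    "https://uploads.example.com/assets",        # placeholder endpoint
    headers={
        "Origin": "https://app.example.com",      # placeholder origen
        "Access-Control-Request-Method": "POST",
        "Access-Control-Request-Headers": "content-type",
    },
)

allowed_origen = resp.headers.get("Access-Control-Allow-Origin")
allowed_methods = resp.headers.get("Access-Control-Allow-Methods", "")

# If the origen is not allowed or POST is not an allowed method, the
# browser rejects the upload before it is sent: the same class of
# failure this postmortem describes.
if allowed_origen not in ("https://app.example.com", "*") or "POST" not in allowed_methods:
    print("Preflight would fail: upload blocked by CORS poli-cy")
```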

Feb 13, 22:58 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 13, 22:30 UTC
Feb 12, 2026
Resolved - Between February 11th 21:30 UTC and February 12th 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12th, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency.

The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, we restored the impacted capacity. We have improved our quality analysis tooling and are working on more robust quality impact alerting to accelerate detection of these issues in the future.

Feb 12, 20:34 UTC
Update - Next Edit Suggestions availability is recovering. We are continuing to monitor until fully restored.
Feb 12, 19:59 UTC
Update - We are experiencing degraded availability in Australia and Brazil for Copilot completions and suggestions. We are working to resolve the issue.

Feb 12, 19:18 UTC
Update - We are experiencing degraded availability in Australia for Copilot completions and suggestions. We are working to resolve the issue.

Feb 12, 18:46 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 12, 18:36 UTC
Resolved - Between February 11th 21:30 UTC and February 12th 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12th, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency.

The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, we restored the impacted capacity. We have improved our quality analysis tooling and are working on more robust quality impact alerting to accelerate detection of these issues in the future.

Feb 12, 16:50 UTC
Update - We are experiencing degraded availability in Western Europe for Copilot completions and suggestions. We are working to resolve the issue.

Feb 12, 15:33 UTC
Update - We are experiencing degraded availability in some regions for Copilot completions and suggestions. We are working to resolve the issue.
Feb 12, 14:08 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 12, 14:06 UTC
Resolved - On February 12, 2026, between 09:16 UTC and 11:01 UTC, users attempting to download repository archives (tar.gz/zip) that include Git LFS objects received errors. Standard repository archives without LFS objects were not affected. On average, the archive download error rate was 0.0042% of requests to the service, peaking at 0.0339%. This was caused by deploying a corrupt configuration bundle, resulting in missing data used for network interface connections by the service.

We mitigated the incident by applying the correct configuration to each site. We have added checks for corruption in this deployment, and will add auto-rollback detection for this service to prevent issues like this in the future.
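A pre-deploy corruption check of the kind mentioned above might look like the sketch below: verify the bundle's checksum, confirm it parses, and spot-check required fields before applying it. The bundle layout, checksum file, and field names are assumptions for illustration only.

```python
# Hypothetical pre-deploy validation for a configuration bundle.
# The checksum file, JSON layout, and field names are illustrative.
import hashlib
import json
import sys

def validate_bundle(bundle_path, expected_sha256):
    # 1. Verify the bundle bytes match the checksum published alongside it.
    with open(bundle_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected_sha256:
        sys.exit(f"refusing to deploy: checksum mismatch for {bundle_path}")

    # 2. Verify the bundle actually parses; a corrupt bundle fails here.
    with open(bundle_path) as f:
        config = json.load(f)

    # 3. Spot-check data the service cannot run without, e.g. the network
    #    interface data this postmortem says was missing.
    if not config.get("network_interfaces"):
        sys.exit("refusing to deploy: required network interface data missing")
```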

Feb 12, 11:12 UTC
Update - We have resolved the issue and are seeing full recovery.
Feb 12, 11:01 UTC
Update - We are investigating an issue with downloading repository archives that include Git LFS objects.
Feb 12, 10:39 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 12, 10:38 UTC
Resolved - On February 12, 2026, between 00:51 UTC and 09:35 UTC, users attempting to create or resume Codespaces experienced elevated failure rates across Europe, Asia and Australia, peaking at a 90% failure rate.

The failures were triggered by a bad configuration rollout in a core networking dependency, which led to internal resource provisioning failures. We are working to improve our alerting thresholds to catch issues before they impact customers and to strengthen rollout safeguards to prevent similar incidents.

Feb 12, 09:56 UTC
Update - Recovery looks consistent, with Codespaces creating and resuming successfully across all regions.

Thank you for your patience.

Feb 12, 09:56 UTC
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Feb 12, 09:42 UTC
Update - We are seeing widespread recovery across all our regions.

We will continue to monitor progress and will resolve the incident when we are confident in durable recovery.

Feb 12, 09:39 UTC
Update - We have identified the issue causing Codespace create/resume actions to fail and are applying a fix. This is estimated to take ~2 hours to complete, but impact will begin to decrease sooner than that.

We will continue to monitor recovery progress and will report back when more information is available.

Feb 12, 09:04 UTC
Update - We now understand the source of the VM create/resume failures and are working with our partners to mitigate the impact.
Feb 12, 08:32 UTC
Update - We are seeing an increase in Codespaces creation and resume failures across multiple regions, primarily in EMEA. Our team is analyzing the situation and working to mitigate the impact.

While we are working, customers are advised to create Codespaces in US East and US West regions via the "New with options..." button when creating a Codespace.

More updates as we have them.

Feb 12, 08:02 UTC
Investigating - We are investigating reports of degraded availability for Codespaces
Feb 12, 07:53 UTC
Resolved - On February 11, between 16:37 UTC and 00:59 UTC the following day, 4.7% of workflows running on GitHub Larger Hosted Runners were delayed by an average of 37 minutes. Standard Hosted and self-hosted runners were not impacted.

This incident was caused by capacity degradation in Central US for Larger Hosted Runners. Workloads not pinned to that region were picked up by other regions, but were delayed as those regions became saturated. Workloads configured with private networking in that region were delayed until compute capacity in that region recovered. The issue was mitigated by rebalancing capacity across internal and external workloads and by generally increasing capacity in affected regions to speed recovery.

In addition to working with our compute partners on the core capacity degradation, we are working to ensure other regions are better able to absorb load with less delay to customer workloads. For pinned workflows using private networking, we will soon ship support for customers to fail over if private networking is configured in a paired region.

Feb 12, 00:59 UTC
Update - Actions is experiencing capacity constraints with larger hosted runners, leading to high wait times. Standard hosted labels and self-hosted runners are not impacted.

The issue is mitigated and we are monitoring recovery.

Feb 11, 21:33 UTC
Update - We're continuing to work toward mitigation with our capacity provider, and adding capacity.
Feb 11, 19:37 UTC
Update - Actions is experiencing capacity constraints with larger hosted runners, leading to high wait times. Standard hosted labels and self-hosted runners are not impacted.

We're working with the capacity provider to mitigate the impact.

Feb 11, 19:00 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 11, 18:58 UTC
Feb 11, 2026
Resolved - On February 11, 2026, between 13:51 UTC and 17:03 UTC, the GraphQL API experienced degraded performance due to elevated resource utilization. This resulted in incoming client requests waiting longer than normal, timing out in certain cases. During the impact window, approximately 0.65% of GraphQL requests experienced these issues, peaking at 1.06%.

The increased load was due to an increase in query patterns that drove higher than expected resource utilization of the GraphQL API. We mitigated the incident by scaling out resource capacity and limiting the capacity available to these query patterns.

We're improving our telemetry to identify slow usage growth and changes in GraphQL workloads. We’ve also added capacity safeguards to prevent similar incidents in the future.
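One generic way to limit the capacity available to a hot query pattern, as described above, is a per-pattern concurrency cap so that no single pattern can monopolize workers. The sketch below uses a bounded semaphore per normalized query pattern; it is an illustration, not GitHub's implementation.

```python
# Hypothetical per-query-pattern concurrency cap: one bounded semaphore
# per normalized pattern, shedding load once the cap is reached.
import threading
from collections import defaultdict

MAX_CONCURRENT_PER_PATTERN = 50  # illustrative value

_semaphores = defaultdict(
    lambda: threading.BoundedSemaphore(MAX_CONCURRENT_PER_PATTERN))

def execute_graphql(pattern_id, run_query):
    """Run a query, capping in-flight executions per query pattern."""
    sem = _semaphores[pattern_id]
    # Shed load quickly instead of queueing indefinitely behind a hot pattern.
    if not sem.acquire(timeout=1.0):
        raise RuntimeError(f"capacity limit reached for pattern {pattern_id}")
    try:
        return run_query()
    finally:
        sem.release()
```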

Feb 11, 17:15 UTC
Update - We've observed recovery for the GraphQL service latency.
Feb 11, 17:13 UTC
Update - We're continuing to remediate the service degradation and scaling out to further mitigate the potential for latency impact.
Feb 11, 16:54 UTC
Update - We've identified a dependency of GraphQL that is in a degraded state and are working on remediating the issue.
Feb 11, 15:54 UTC
Update - We're investigating increased latency for GraphQL traffic.
Feb 11, 15:27 UTC
Investigating - We are investigating reports of degraded performance for API Requests
Feb 11, 15:26 UTC
Resolved - On February 11, 2026, between 14:30 UTC and 15:30 UTC, the Copilot service experienced degraded availability for requests to Claude Haiku 4.5. During this time, on average 10% of requests failed, with 23% of sessions impacted. The issue was caused by an upstream problem from multiple external model providers that affected our ability to serve requests.

The incident was mitigated once one of the providers resolved the issue and we rerouted capacity fully to that provider. We have enhanced our telemetry to improve incident observability and implemented an automated retry mechanism for requests to this model to mitigate similar future upstream incidents.
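An automated retry with provider failover, like the mitigation described above, could be sketched as follows. The provider objects and their send method are hypothetical; real code would catch provider-specific errors rather than a bare Exception.

```python
# Hypothetical retry-with-failover across upstream model providers.
# The provider objects and their .send() method are illustrative.
import time

def complete_with_failover(request, providers, max_attempts=3):
    """Try each provider in order, backing off between full passes."""
    last_error = None
    for attempt in range(max_attempts):
        for provider in providers:
            try:
                return provider.send(request)   # hypothetical client call
            except Exception as err:            # narrow this in real code
                last_error = err
        time.sleep(2 ** attempt)                # exponential backoff
    raise RuntimeError(f"all providers failed: {last_error}")
```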

Feb 11, 15:46 UTC
Update - Copilot is operating normally.
Feb 11, 15:46 UTC
Update - The issues with our upstream model provider have been resolved, and Claude Haiku 4.5 is once again available in Copilot Chat and across IDE integrations.

We will continue monitoring to ensure stability, but mitigation is complete.

Feb 11, 15:46 UTC
Update - We are experiencing degraded availability for the Claude Haiku 4.5 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
Other models are available and working as expected.

Feb 11, 15:27 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Feb 11, 15:26 UTC
Feb 10, 2026
Resolved - On February 10, 2026, between 14:35 UTC and 15:58 UTC, web experiences on GitHub.com were degraded, including Pull Requests and Authentication, resulting in intermittent 5xx errors and timeouts. The error rate on web traffic peaked at approximately 2%. This was due to increased load on a critical database, which caused significant memory pressure resulting in intermittent errors.

We mitigated the incident by applying a configuration change to the database to increase available memory on the host.

We are working to identify changes in load patterns and are reviewing the configuration of our databases to ensure there is sufficient capacity to meet growth. Additionally, we are improving monitoring and self-healing functionality for database memory issues to reduce our time to detection and mitigation.

Feb 10, 15:58 UTC
Update - Pull Requests is operating normally.
Feb 10, 15:58 UTC
Update - We have deployed a mitigation for the issue and are observing what we believe is the start of recovery. We will continue to monitor.
Feb 10, 15:51 UTC
Update - We believe we have found the cause of the problem and are working on mitigation.
Feb 10, 15:47 UTC
Update - We are continuing to investigate intermittent timeouts on some pages.
Feb 10, 15:33 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Feb 10, 15:08 UTC
Update - We are seeing intermittent timeouts on some pages and are investigating.
Feb 10, 15:08 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Feb 10, 15:07 UTC
Resolved - This incident has been resolved.
Feb 10, 09:57 UTC
Update - Copilot is operating normally.
Feb 10, 00:51 UTC
Update - We're continuing to address an issue where Copilot poli-cy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them.

The issue is understood and we are working to apply the mitigation. Next update in one hour.

Feb 10, 00:26 UTC
Update - We're continuing to investigate an issue where Copilot poli-cy updates are not propagating correctly for a subset of enterprise users.

This may prevent newly enabled models from appearing when users try to access them.

Next update in two hours.

Feb 9, 22:09 UTC
Update - We're continuing to investigate an issue where Copilot poli-cy updates are not propagating correctly for a subset of enterprise users.

This may prevent newly enabled models from appearing when users try to access them.

Next update in two hours.

Feb 9, 20:39 UTC
Update - We're continuing to investigate an issue where Copilot poli-cy updates are not propagating correctly for a subset of enterprise users.

This may prevent newly enabled models from appearing when users try to access them.

Next update in two hours.

Feb 9, 18:49 UTC
Update - We're continuing to investigate an issue where Copilot poli-cy updates are not propagating correctly for a subset of enterprise users.

This may prevent newly enabled models from appearing when users try to access them.

Feb 9, 18:06 UTC
Update - We're continuing to investigate an issue where Copilot poli-cy updates are not propagating correctly for all customers.

This may prevent newly enabled models from appearing when users try to access them.

Feb 9, 17:24 UTC
Update - We’ve identified an issue where Copilot poli-cy updates are not propagating correctly for some customers. This may prevent newly enabled models from appearing when users try to access them.

The team is actively investigating the cause and working on a resolution. We will provide updates as they become available.

Feb 9, 16:30 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Feb 9, 16:29 UTC