33 Commits

Author SHA1 Message Date
Sline
fd98caccd2
revert: use-app-data (#6088)
* Revert "refactor(app-data): split monolithic context into focused SWR hooks (#5576)"

This reverts commit 8e8182f70793f44e76ca3bb71eb98d6d60701e13.

# Conflicts:
#	src/components/home/clash-info-card.tsx
#	src/components/home/clash-mode-card.tsx
#	src/components/home/current-proxy-card.tsx
#	src/components/home/home-profile-card.tsx
#	src/components/proxy/provider-button.tsx
#	src/components/proxy/proxy-chain.tsx
#	src/components/proxy/proxy-groups.tsx
#	src/components/proxy/use-render-list.ts
#	src/components/rule/provider-button.tsx
#	src/components/setting/mods/sysproxy-viewer.tsx
#	src/hooks/use-clash-data.ts
#	src/hooks/use-current-proxy.ts
#	src/hooks/use-shared-swr-poller.ts
#	src/hooks/use-system-proxy-state.ts
#	src/pages/rules.tsx

* docs: Changelog.md
2026-01-16 18:32:31 +08:00
Sline
b4e25951b4
feat(proxy): unify filter search box and sync filter options (#5819)
* feat(proxy): unify filter search box and sync filter options

* docs: fix JSDoc placement

* refactor(search): unify string matcher and stabilize regex filtering

* fix(search): treat invalid regex as matching nothing

* docs: update changelog to include advanced filter search feature for proxy page

---------

Co-authored-by: Tunglies <77394545+Tunglies@users.noreply.github.com>
2025-12-19 17:12:36 +08:00
Slinetrac
3cf51de850
refactor: reorganize component and hook structure; remove unused modules 2025-12-04 12:51:22 +08:00
Sline
caca8b2bc4
refactor: debugLog for frontend (#5587) 2025-11-25 10:01:04 +08:00
Sline
8e8182f707
refactor(app-data): split monolithic context into focused SWR hooks (#5576)
* refactor(app-data): split monolithic context into focused SWR hooks

* refactor(swr): unify polling and consolidate proxy/config/provider data flow
2025-11-24 16:18:31 +08:00
Tunglies
a869dbb441
Revert "refactor: profile switch (#5197)"
This reverts commit c2dcd867228aae3f2fa0855226244cc701ba4f43.
2025-10-30 18:11:04 +08:00
Sline
c2dcd86722
refactor: profile switch (#5197)
* refactor: proxy refresh

* fix(proxy-store): properly hydrate and filter backend provider snapshots

* fix(proxy-store): add monotonic fetch guard and event bridge cleanup

* fix(proxy-store): tweak fetch sequencing guard to prevent snapshot invalidation from wiping fast responses

* docs: UPDATELOG.md

* fix(proxy-snapshot, proxy-groups): restore last-selected proxy and group info

* fix(proxy): merge static and provider entries in snapshot; fix Virtuoso viewport height

* fix(proxy-groups): restrict reduced-height viewport to chain-mode column

* refactor(profiles): introduce a state machine

* refactor: replace state machine with reducer

* refactor: introduce a profile switch worker

* refactor: hooked up a backend-driven profile switch flow

* refactor(profile-switch): serialize switches with async queue and enrich frontend events

* feat(profiles): centralize profile switching with reducer/driver queue to fix stuck UI on rapid toggles

* chore: translate comments and log messages to English to avoid encoding issues

* refactor: migrate backend queue to SwitchDriver actor

* fix(profile): unify error string types in validation helper

* refactor(profile): make switch driver fully async and handle panics safely

* refactor(cmd): move switch-validation helper into new profile_switch module

* refactor(profile): modularize switch logic into profile_switch.rs

* refactor(profile_switch): modularize switch handler

- Break monolithic switch handler into proper module hierarchy
- Move shared globals, constants, and SwitchScope guard to state.rs
- Isolate queue orchestration and async task spawning in driver.rs
- Consolidate switch pipeline and config patching in workflow.rs
- Extract request pre-checks/YAML validation into validation.rs

* refactor(profile_switch): centralize state management and add cancellation flow

- Introduced SwitchManager in state.rs to unify mutex, sequencing, and SwitchScope handling.
- Added SwitchCancellation and SwitchRequest wrappers to encapsulate cancel tokens and notifications.
- Updated driver to allocate task IDs via SwitchManager, cancel old tokens, and queue next jobs in order.
- Updated workflow to check cancellation and sequence at each phase, replacing global flags with manager APIs.
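
A minimal std-only sketch of the sequencing-plus-cancellation idea described above; `SwitchManager::begin` and `is_current` are illustrative names, not the repository's actual API, which the commit also describes as owning the global mutex and the `SwitchScope` guard:

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::{Arc, Mutex};

/// Monotonic sequence plus one cancel flag per in-flight switch (illustrative only).
#[derive(Default)]
struct SwitchManager {
    sequence: AtomicU64,
    current_cancel: Mutex<Option<Arc<AtomicBool>>>,
}

impl SwitchManager {
    /// Allocate a new task id, cancelling whatever switch was previously in flight.
    fn begin(&self) -> (u64, Arc<AtomicBool>) {
        let id = self.sequence.fetch_add(1, Ordering::SeqCst) + 1;
        let token = Arc::new(AtomicBool::new(false));
        let mut slot = self.current_cancel.lock().unwrap();
        if let Some(old) = slot.replace(token.clone()) {
            old.store(true, Ordering::SeqCst); // supersede the older request
        }
        (id, token)
    }

    /// Each phase checks both its own cancel token and that it is still the newest task.
    fn is_current(&self, id: u64, token: &AtomicBool) -> bool {
        !token.load(Ordering::SeqCst) && id == self.sequence.load(Ordering::SeqCst)
    }
}

fn main() {
    let mgr = SwitchManager::default();
    let (first_id, first_token) = mgr.begin();
    let (second_id, second_token) = mgr.begin(); // supersedes the first switch
    assert!(!mgr.is_current(first_id, &first_token));
    assert!(mgr.is_current(second_id, &second_token));
    println!("active switch: {second_id}");
}
```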

* feat(profile_switch): integrate explicit state machine for profile switching

- workflow.rs:24 now delegates each switch to SwitchStateMachine, passing an owned SwitchRequest.
  Queue cancellation and state-sequence checks are centralized inside the machine instead of scattered guards.
- workflow.rs:176 replaces the old helper with `SwitchStateMachine::new(manager(), None, profiles).run().await`,
  ensuring manual profile patches follow the same workflow (locking, validation, rollback) as queued switches.
- workflow.rs:180 & 275 expose `validate_profile_yaml` and `restore_previous_profile` for reuse inside the state machine.

- workflow/state_machine.rs:1 introduces a dedicated state machine module.
  It manages global mutex acquisition, request/cancellation state, YAML validation, draft patching,
  `CoreManager::update_config`, failure rollback, and tray/notification side-effects.
  Transitions check for cancellations and stale sequences; completions release guards via `SwitchScope` drop.
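
A rough sketch of a stage-by-stage machine that re-checks cancellation between transitions, as described above; the stage names and the boolean `cancelled` flag are placeholders for the real `SwitchStateMachine`, its cancel tokens, and its stale-sequence checks:

```rust
/// Illustrative stages only; the real machine also patches drafts, updates the core, etc.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Stage {
    AcquireLock,
    ValidateYaml,
    ApplyConfig,
    Finalize,
    Done,
}

struct Machine {
    stage: Stage,
    cancelled: bool, // stands in for a cancel token / stale-sequence check
}

impl Machine {
    fn step(&mut self) -> Result<(), &'static str> {
        if self.cancelled {
            return Err("switch superseded, rolling back");
        }
        self.stage = match self.stage {
            Stage::AcquireLock => Stage::ValidateYaml,
            Stage::ValidateYaml => Stage::ApplyConfig,
            Stage::ApplyConfig => Stage::Finalize,
            Stage::Finalize | Stage::Done => Stage::Done,
        };
        Ok(())
    }

    fn run(&mut self) -> Result<(), &'static str> {
        while self.stage != Stage::Done {
            self.step()?; // every transition re-checks cancellation
        }
        Ok(())
    }
}

fn main() {
    let mut ok = Machine { stage: Stage::AcquireLock, cancelled: false };
    assert!(ok.run().is_ok());

    let mut stale = Machine { stage: Stage::ApplyConfig, cancelled: true };
    assert!(stale.run().is_err()); // a superseded request never reaches Done
}
```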

* refactor(profile-switch): integrate stage-aware panic handling

- src-tauri/src/cmd/profile_switch/workflow/state_machine.rs:1
  Defines SwitchStage and SwitchPanicInfo as crate-visible, wraps each transition in with_stage(...) with catch_unwind, and propagates CmdResult<bool> to distinguish validation failures from panics while keeping cancellation semantics.

- src-tauri/src/cmd/profile_switch/workflow.rs:25
  Updates run_switch_job to return Result<bool, SwitchPanicInfo>, routing timeout, validation, config, and stage panic cases separately. Reuses SwitchPanicInfo for logging/UI notifications; patch_profiles_config maps state-machine panics into user-facing error strings.

- src-tauri/src/cmd/profile_switch/driver.rs:1
  Adds SwitchJobOutcome to unify workflow results: normal completions carry bool, and panics propagate SwitchPanicInfo. The driver loop now logs panics explicitly and uses AssertUnwindSafe(...).catch_unwind() to guard setup-phase panics.
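
The stage-aware panic handling can be pictured roughly like this; `with_stage` and `PanicInfoStub` are hypothetical stand-ins for the `SwitchStage`/`SwitchPanicInfo` types mentioned above, and the sketch only shows the `catch_unwind` wrapping, not the `CmdResult` plumbing:

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

/// Illustrative stand-in for the panic metadata the log describes.
#[derive(Debug)]
struct PanicInfoStub {
    stage: &'static str,
}

/// Run one stage, converting a panic into structured data instead of aborting the driver.
fn with_stage<T>(
    stage: &'static str,
    body: impl FnOnce() -> Result<T, String>,
) -> Result<Result<T, String>, PanicInfoStub> {
    catch_unwind(AssertUnwindSafe(body)).map_err(|_| PanicInfoStub { stage })
}

fn main() {
    // A normal validation failure stays a plain error...
    let validation = with_stage("validate", || Err::<(), _>("bad yaml".into()));
    assert!(matches!(validation, Ok(Err(_))));

    // ...while a panic inside a stage is caught and reported with its stage name.
    let panicked = with_stage("apply", || -> Result<(), String> { panic!("boom") });
    assert!(panicked.is_err());
    println!("caught: {:?}", panicked.unwrap_err());
}
```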

* refactor(profile-switch): add watchdog, heartbeat, and async timeout guards

- Introduce SwitchHeartbeat for stage tracking and timing; log stage transitions with elapsed durations.
- Add watchdog in driver to cancel stalled switches (5s heartbeat timeout).
- Wrap blocking ops (Config::apply, tray updates, profiles_save_file_safe, etc.) with time::timeout to prevent async stalls.
- Improve logs for stage transitions and watchdog timeouts to clarify cancellation points.
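
Conceptually, the heartbeat/watchdog pair looks something like the sketch below; the `Heartbeat` type and the 5-second budget are illustrative, and the real watchdog additionally fires the cancellation token for the stalled switch:

```rust
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};

/// The switch task "beats" whenever it enters a new stage; a watchdog cancels it
/// if the beat goes stale.
#[derive(Clone)]
struct Heartbeat {
    last: Arc<Mutex<(Instant, &'static str)>>,
}

impl Heartbeat {
    fn new() -> Self {
        Self { last: Arc::new(Mutex::new((Instant::now(), "start"))) }
    }

    /// Called by the workflow on every stage transition.
    fn beat(&self, stage: &'static str) {
        *self.last.lock().unwrap() = (Instant::now(), stage);
    }

    /// Watchdog side: true if no stage transition happened within the budget.
    fn is_stalled(&self, budget: Duration) -> bool {
        self.last.lock().unwrap().0.elapsed() > budget
    }
}

fn main() {
    let hb = Heartbeat::new();
    hb.beat("validate");
    // A real watchdog would poll `is_stalled` on an interval and trigger the cancel token.
    println!("stalled: {}", hb.is_stalled(Duration::from_secs(5)));
}
```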

* refactor(profile-switch): async post-switch tasks, early lock release, and spawn_blocking for IO

* feat(profile-switch): track cleanup and coordinate pipeline

- Add explicit cleanup tracking in the driver (`cleanup_profiles` map + `CleanupDone` messages) to know when background post-switch work is still running before starting a new workflow. (driver.rs:29-50)
- Update `handle_enqueue` to detect “cleanup in progress”: same-profile retries are short-circuited; other requests collapse the pending queue, cancelling old tokens so only the latest intent survives. (driver.rs:176-247)
- Rework scheduling helpers: `start_next_job` refuses to start while cleanup is outstanding; discarded requests release cancellation tokens; cleanup completion explicitly restarts the pipeline. (driver.rs:258-442)
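
The cleanup gating can be reduced to a small synchronous sketch; `Driver`, `cleanup_started`, and `cleanup_done` are illustrative names for the `cleanup_profiles` map and `CleanupDone` handling described above:

```rust
use std::collections::{HashSet, VecDeque};

/// Minimal shape of the gating: remember which profiles still have background
/// cleanup running and refuse to start the next switch until they finish.
#[derive(Default)]
struct Driver {
    cleanup_in_flight: HashSet<String>,
    pending: VecDeque<String>,
}

impl Driver {
    fn enqueue(&mut self, profile: String) {
        // Collapse the queue so only the latest intent survives, as described above.
        self.pending.clear();
        self.pending.push_back(profile);
    }

    fn cleanup_started(&mut self, profile: String) {
        self.cleanup_in_flight.insert(profile);
    }

    /// Cleanup completion explicitly restarts the pipeline.
    fn cleanup_done(&mut self, profile: &str) -> Option<String> {
        self.cleanup_in_flight.remove(profile);
        self.try_start_next()
    }

    /// Only hand out the next job when no cleanup is outstanding.
    fn try_start_next(&mut self) -> Option<String> {
        if self.cleanup_in_flight.is_empty() {
            self.pending.pop_front()
        } else {
            None
        }
    }
}

fn main() {
    let mut driver = Driver::default();
    driver.cleanup_started("old-profile".to_string());
    driver.enqueue("new-profile".to_string());
    assert!(driver.try_start_next().is_none()); // blocked while cleanup runs
    assert_eq!(driver.cleanup_done("old-profile"), Some("new-profile".to_string()));
}
```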

* feat(profile-switch): unify post-switch cleanup handling

- workflow.rs (25-427) returns `SwitchWorkflowResult` (success + CleanupHandle) or `SwitchWorkflowError`.
  All failure/timeout paths stash post-switch work into a single CleanupHandle.
  Cleanup helpers (`notify_profile_switch_finished` and `close_connections_after_switch`) run inside that task for proper lifetime handling.

- driver.rs (29-439) propagates CleanupHandle through `SwitchJobOutcome`, spawns a bridge to wait for completion, and blocks `start_next_job` until done.
  Direct driver-side panics now schedule failure cleanup via the shared helper.

* tmp

* Revert "tmp"

This reverts commit e582cf4a652231a67a7c951802cb19b385f6afd7.

* refactor: queue frontend events through async dispatcher

* refactor: queue frontend switch/proxy events and throttle notices

* chore: frontend debug log

* fix: re-enable only ProfileSwitchFinished events - keep others suppressed for crash isolation

- Re-enabled only ProfileSwitchFinished events; RefreshClash, RefreshProxy, and ProfileChanged remain suppressed (they log suppression messages)
- Allows frontend to receive task completion notifications for UI feedback while crash isolation continues
- src-tauri/src/core/handle.rs now only suppresses notify_profile_changed
- Serialized emitter, frontend logging bridge, and other diagnostics unchanged

* refactor: refreshClashData

* refactor(proxy): stabilize proxy switch pipeline and rendering

- Add coalescing buffer in notification.rs to emit only the latest proxies-updated snapshot
- Replace nextTick with queueMicrotask in asyncQueue.ts for same-frame hydration
- Hide auto-generated GLOBAL snapshot and preserve optional metadata in proxy-snapshot.ts
- Introduce stable proxy rendering state in AppDataProvider (proxyTargetProfileId, proxyDisplayProfileId, isProxyRefreshPending)
- Update proxy page to fade content during refresh and overlay status banner instead of showing incomplete snapshot

* refactor(profiles): move manual activating logic to reducer for deterministic queue tracking

* refactor: replace proxy-data event bridge with pure polling and simplify proxy store

- Replaced the proxy-data event bridge with pure polling: AppDataProvider now fetches the initial snapshot and drives refreshes from the polled switchStatus, removing verge://refresh-* listeners (src/providers/app-data-provider.tsx).
- Simplified proxy-store by dropping the proxies-updated listener queue and unused payload/normalizer helpers; relies on SWR/provider fetch path + calcuProxies for live updates (src/stores/proxy-store.ts).
- Trimmed layout-level event wiring to keep only notice/show/hide subscriptions, removing obsolete refresh listeners (src/pages/_layout/useLayoutEvents.ts).

* refactor(proxy): streamline proxies-updated handling and store event flow

- AppDataProvider now treats `proxies-updated` as the fast path: the listener
  calls `applyLiveProxyPayload` immediately and schedules only a single fallback
  `fetchLiveProxies` ~600 ms later (replacing the old 0/250/1000/2000 cascade).
  Expensive provider/rule refreshes run in parallel via `Promise.allSettled`, and
  the multi-stage queue on profile updates completion was removed
  (src/providers/app-data-provider.tsx).

- Rebuilt proxy-store to support the event flow: restored `setLive`, provider
  normalization, and an animation-frame + async queue that applies payloads without
  blocking. Exposed `applyLiveProxyPayload` so providers can push events directly
  into the store (src/stores/proxy-store.ts).

* refactor: switch delay

* refactor(app-data-provider): trigger getProfileSwitchStatus revalidation on profile-switch-finished

- AppDataProvider now listens to `profile-switch-finished` and calls `mutate("getProfileSwitchStatus")` to immediately update state and unlock buttons (src/providers/app-data-provider.tsx).
- Retain existing detailed timing logs for monitoring other stages.
- Frontend success notifications remain instant; background refreshes continue asynchronously.

* fix(profiles): prevent duplicate toast on page remount

* refactor(profile-switch): make active switches preemptible and prevent queue piling

- Add notify mechanism to SwitchCancellation to await cancellation without busy-waiting (state.rs:82)
- Collapse pending queue to a single entry in the driver; cancel in-flight task on newer request (driver.rs:232)
- Update handle_update_core to watch cancel token and 30s timeout; release locks, discard draft, and exit early if canceled (state_machine.rs:301)
- Providers revalidate status immediately on profile-switch-finished events (app-data-provider.tsx:208)
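
The await-instead-of-poll cancellation can be sketched with `tokio::sync::Notify`, which is one possible primitive (an assumption, not necessarily what the repository uses) for waking a superseded task without busy-waiting:

```rust
use std::sync::Arc;
use tokio::sync::Notify;
use tokio::time::{sleep, Duration};

/// Cancellation that can be awaited: the superseded task parks on `cancelled()`
/// and wakes as soon as the driver cancels it (shape only, names illustrative).
#[derive(Clone, Default)]
struct SwitchCancellation {
    notify: Arc<Notify>,
}

impl SwitchCancellation {
    fn cancel(&self) {
        // notify_one stores a permit, so a cancel issued before the task starts
        // waiting is not lost.
        self.notify.notify_one();
    }

    async fn cancelled(&self) {
        self.notify.notified().await;
    }
}

#[tokio::main]
async fn main() {
    let token = SwitchCancellation::default();
    let waiter = token.clone();

    let task = tokio::spawn(async move {
        // Either the "work" finishes or the cancellation fires, whichever comes first.
        tokio::select! {
            _ = waiter.cancelled() => "cancelled",
            _ = sleep(Duration::from_secs(30)) => "finished",
        }
    });

    token.cancel();
    println!("switch outcome: {}", task.await.unwrap());
}
```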

* refactor(core): make core reload phase controllable, reduce 0xcfffffff risk

- CoreManager::apply_config now calls `reload_config_with_retry`, each attempt waits up to 5s, retries 3 times; on failure, returns error with duration logged and triggers core restart if needed (src-tauri/src/core/manager/config.rs:175, 205)
- `reload_config_with_retry` logs attempt info on timeout or error; if error is a Mihomo connection issue, fallback to original restart logic (src-tauri/src/core/manager/config.rs:211)
- `reload_config_once` retains original Mihomo call for retry wrapper usage (src-tauri/src/core/manager/config.rs:247)
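
The retry shape described here (per-attempt timeout, bounded retries, restart fallback) can be sketched as follows, assuming the project's tokio runtime; `reload_config_once` is a stub, and the 5s / 3-attempt numbers simply mirror the description above:

```rust
use std::time::Duration;
use tokio::time::{sleep, timeout};

/// Stand-in for the actual Mihomo reload call; here it just succeeds after a short delay.
async fn reload_config_once() -> Result<(), String> {
    sleep(Duration::from_millis(100)).await;
    Ok(())
}

/// Retry the reload up to `attempts` times, giving each attempt its own wall-clock budget.
async fn reload_config_with_retry(attempts: u32, per_attempt: Duration) -> Result<(), String> {
    let mut last_err = String::from("no attempts made");
    for attempt in 1..=attempts {
        match timeout(per_attempt, reload_config_once()).await {
            Ok(Ok(())) => return Ok(()),
            Ok(Err(e)) => last_err = format!("attempt {attempt} failed: {e}"),
            Err(_) => last_err = format!("attempt {attempt} timed out"),
        }
    }
    // The caller can fall back to a full core restart here, as the log describes.
    Err(last_err)
}

#[tokio::main]
async fn main() {
    match reload_config_with_retry(3, Duration::from_secs(5)).await {
        Ok(()) => println!("reload ok"),
        Err(e) => eprintln!("reload failed: {e}"),
    }
}
```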

* chore(frontend-logs): downgrade routine event logs from info to debug

- Logs like `emit_via_app entering spawn_blocking`, `Async emit…`, `Buffered proxies…` are now debug-level (src-tauri/src/core/notification.rs:155, :265, :309…)
- Genuine warnings/errors (failures/timeouts) remain at warn/error
- Core stage logs remain info to keep backend tracking visible

* refactor(frontend-emit): make emit_via_app fire-and-forget async

- `emit_via_app` now a regular function; spawns with `tokio::spawn` and logs a warn if `emit_to` fails, caller returns immediately (src-tauri/src/core/notification.rs:269)
- Removed `.await` at Async emit and flush_proxies calls; only record dispatch duration and warn on failure (src-tauri/src/core/notification.rs:211, :329)

* refactor(ui): restructure profile switch for event-driven speed + polling stability

- Backend
  - SwitchManager maintains a lightweight event queue: added `event_sequence`, `recent_events`, and `SwitchResultEvent`; provides `push_event` / `events_after` (state.rs)
  - `handle_completion` pushes events on success/failure and keeps `last_result` (driver.rs) for frontend incremental fetch
  - New Tauri command `get_profile_switch_events(after_sequence)` exposes `events_after` (profile_switch/mod.rs → profile.rs → lib.rs)
- Notification system
  - `NotificationSystem::process_event` only logs debug, disables WebView `emit_to`, fixes 0xcfffffff
  - Related emit/buffer functions now safe no-op, removed unused structures and warnings (notification.rs)
- Frontend
  - services/cmds.ts defines `SwitchResultEvent` and `getProfileSwitchEvents`
  - `AppDataProvider` holds `switchEventSeqRef`, polls incremental events every 0.25s (busy) / 1s (idle); each event triggers:
      - immediate `globalMutate("getProfiles")` to refresh current profile
      - background refresh of proxies/providers/rules via `Promise.allSettled` (failures logged, non-blocking)
      - forced `mutateSwitchStatus` to correct state
  - original switchStatus effect calls `handleSwitchResult` as fallback; other toast/activation logic handled in profiles.tsx
- Commands / API cleanup
  - removed `pub use profile_switch::*;` in cmd::mod.rs to avoid conflicts; frontend uses new command polling
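
The event-queue contract behind `get_profile_switch_events` can be pictured like this; `EventLog` and the trimmed `SwitchResultEvent` fields are illustrative, while `push_event` / `events_after` mirror the names mentioned above:

```rust
use std::collections::VecDeque;

/// Illustrative event record; the real SwitchResultEvent carries task ids, errors, etc.
#[derive(Clone, Debug)]
struct SwitchResultEvent {
    sequence: u64,
    success: bool,
}

/// Bounded queue of recent results keyed by a monotonic sequence, so the frontend can
/// poll "everything after the last sequence I saw" instead of relying on emits.
struct EventLog {
    next_sequence: u64,
    recent: VecDeque<SwitchResultEvent>,
    capacity: usize,
}

impl EventLog {
    fn new(capacity: usize) -> Self {
        Self { next_sequence: 0, recent: VecDeque::new(), capacity }
    }

    fn push_event(&mut self, success: bool) -> u64 {
        self.next_sequence += 1;
        if self.recent.len() == self.capacity {
            self.recent.pop_front(); // drop the oldest event to stay bounded
        }
        self.recent.push_back(SwitchResultEvent { sequence: self.next_sequence, success });
        self.next_sequence
    }

    fn events_after(&self, after_sequence: u64) -> Vec<SwitchResultEvent> {
        self.recent.iter().filter(|e| e.sequence > after_sequence).cloned().collect()
    }
}

fn main() {
    let mut log = EventLog::new(16);
    log.push_event(true);
    let seen = log.push_event(false);
    log.push_event(true);
    // A poller that last saw `seen` only receives the newer event.
    assert_eq!(log.events_after(seen).len(), 1);
}
```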

* refactor(frontend): optimize profile switch with optimistic updates

* refactor(profile-switch): switch to event-driven flow with Profile Store

- SwitchManager pushes events; frontend polls get_profile_switch_events
- Zustand store handles optimistic profiles; AppDataProvider applies updates and background-fetches
- UI flicker removed

* fix(app-data): re-hook profile store updates during switch hydration

* fix(notification): restore frontend event dispatch and non-blocking emits

* fix(app-data-provider): restore proxy refresh and seed snapshot after refactor

* fix: ensure switch completion events are received and handle proxies-updated

* fix(app-data-provider): dedupe switch results by taskId and fix stale profile state

* fix(profile-switch): ensure patch_profiles_config_by_profile_index waits for real completion and handle join failures in apply_config_with_timeout

* docs: UPDATELOG.md

* chore: add necessary comments

* fix(core): always dispatch async proxy snapshot after RefreshClash event

* fix(proxy-store, provider): handle pending snapshots and proxy profiles

- Added pending snapshot tracking in proxy-store so `lastAppliedFetchId` no longer jumps on seed. Profile adoption is deferred until a qualifying fetch completes. Exposed `clearPendingProfile` for rollback support.
- Cleared pending snapshot state whenever live payloads apply or the store resets, preventing stale optimistic profile IDs after failures.
- In provider integration, subscribed to the pending proxy profile and fed it into target-profile derivation. Cleared it on failed switch results so hydration can advance and UI status remains accurate.

* fix(proxy): re-hook tray refresh events into proxy refresh queue

- Reattached listen("verge://refresh-proxy-config", …) at src/providers/app-data-provider.tsx:402 and registered it for cleanup.
- Added matching window fallback handler at src/providers/app-data-provider.tsx:430 so in-app dispatches share the same refresh path.

* fix(proxy-snapshot/proxy-groups): address review findings on snapshot placeholders

- src/utils/proxy-snapshot.ts:72-95 now derives snapshot group members solely from proxy-groups.proxies, so provider ids under `use` no longer generate placeholder proxy items.
- src/components/proxy/proxy-groups.tsx:665-677 lets the hydration overlay capture pointer events (and shows a wait cursor) so users can’t interact with snapshot-only placeholders before live data is ready.

* fix(profile-switch): preserve queued requests and avoid stale connection teardown

- Keep earlier queued switches intact by dropping the blanket “collapse” call: after removing duplicates for the same profile, new requests are simply appended, leaving other profiles pending (driver.rs:376). Resolves queue-loss scenario.
- Gate connection cleanup on real successes so cancelled/stale runs no longer tear down Mihomo connections; success handler now skips close_connections_after_switch when success == false (workflow.rs:419).

* fix(profile-switch, layout): improve profile validation and restore backend refresh

- Hardened profile validation using `tokio::fs` with a 5s timeout and offloading YAML parsing to `AsyncHandler::spawn_blocking`, preventing slow disks or malformed files from freezing the runtime (src-tauri/src/cmd/profile_switch/validation.rs:9, 71).
- Restored backend-triggered refresh handling by listening for `verge://refresh-clash-config` / `verge://refresh-verge-config` and invoking shared refresh services so SWR caches stay in sync with core events (src/pages/_layout/useLayoutEvents.ts:6, 45, 55).
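
The hardened validation path roughly follows this shape, assuming a tokio runtime; `parse_yaml_stub` stands in for the real YAML parser, and the 5-second budget mirrors the description above:

```rust
use std::time::Duration;
use tokio::{fs, task, time::timeout};

/// Stand-in for real YAML parsing; it only needs to be CPU work that we keep off
/// the async runtime via spawn_blocking.
fn parse_yaml_stub(text: &str) -> Result<(), String> {
    if text.contains(':') {
        Ok(())
    } else {
        Err("not a YAML mapping".into())
    }
}

/// Read the profile with a wall-clock budget, then parse it on a blocking thread,
/// so a slow disk or a pathological file cannot stall the runtime (shape only).
async fn validate_profile_yaml(path: &str) -> Result<(), String> {
    let text = timeout(Duration::from_secs(5), fs::read_to_string(path))
        .await
        .map_err(|_| "reading profile timed out".to_string())?
        .map_err(|e| format!("reading profile failed: {e}"))?;

    task::spawn_blocking(move || parse_yaml_stub(&text))
        .await
        .map_err(|e| format!("parse task failed: {e}"))?
}

#[tokio::main]
async fn main() {
    match validate_profile_yaml("profile.yaml").await {
        Ok(()) => println!("profile looks valid"),
        Err(e) => eprintln!("validation failed: {e}"),
    }
}
```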

* feat(profile-switch): handle cancellations for superseded requests

- Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482)
- Updated the profile switch effect to log cancellations as info, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581)
- Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20)

* fix(profiles): wrap logging payload for Tauri frontend_log

* fix(profile-switch): add rollback and error propagation for failed persistence

- Added rollback on apply failure so Mihomo restores to the previous profile
  before exiting the success path early (state_machine.rs:474).
- Reworked persist_profiles_with_timeout to surface timeout/join/save errors,
  convert them into CmdResult failures, and trigger rollback + error propagation
  when persistence fails (state_machine.rs:703).

* fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks

* fix(profile-switch): preserve pending queue and surface discarded switches

* fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches

* fix(app-data-provider): restore backend-driven refresh and reattach fallbacks

* fix(profile-switch): queue concurrent updates and add bounded wait/backoff

* fix(proxy): trigger live refresh on app start for proxy snapshot

* refactor(profile-switch): split flow into layers and centralize async cleanup

- Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API.
- Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency.
- Replaced monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable.
- Removed legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.
2025-10-30 17:29:15 +08:00
Slinetrac
c63584daca
fix: timeout sort 2025-10-17 14:51:33 +08:00
Tunglies
1176f8c863
feat: refactor app data provider and context for improved data management and performance 2025-10-04 21:20:31 +08:00
Junkai W.
909c4028f1
Add rule adaptation for chained proxies 2025-09-22 18:16:17 +08:00
Tunglies
e414b49879
Refactor imports across multiple components for consistency and clarity
- Reorganized import statements in various components to ensure consistent ordering and grouping.
- Removed unnecessary imports and added missing ones where applicable.
- Improved readability and maintainability of the codebase by standardizing import styles.
2025-09-19 00:01:04 +08:00
Tunglies
627119bb22
Refactor imports and improve code organization across multiple components and hooks
- Consolidated and reordered imports in various files for better readability and maintainability.
- Removed unused imports and ensured consistent import styles.
- Enhanced the structure of components by grouping related imports together.
- Updated the layout and organization of hooks to streamline functionality.
- Improved the overall code quality by following best practices in import management.
2025-09-18 23:34:38 +08:00
Junkai W.
f2073a2f83
Add chained proxy feature (#4624)
* Add chained proxy GUI and language support
Add a method in IRuntime for updating the chained proxy configuration
Also added the corresponding cmd

* Fix a bug when reading the runtime proxy chain configuration file

* t

* Complete the chained proxy configuration construction

* Fix a bug in retrieving the chained proxy runtime configuration

* Complete chained proxy functionality
2025-09-15 07:44:54 +08:00
Sukka
954ff53d9b
refactor: use React in its intended way (#3963)
* refactor: replace `useEffect` w/ `useLocalStorage`

* refactor: replace `useEffect` w/ `useSWR`

* refactor: replace `useEffect` and `useSWR`. clean up `useRef`

* refactor: use `requestIdleCallback`

* refactor: replace `useEffect` w/ `useMemo`

* fix: clean up `useEffect`

* refactor: replace `useEffect` w/ `useSWR`

* refactor: remove unused `useCallback`

* refactor: enhance performance and memory management in frontend processes

* refactor: improve pre-push script structure and readability

---------

Co-authored-by: Tunglies <77394545+Tunglies@users.noreply.github.com>
Co-authored-by: Tunglies <tunglies.dev@outlook.com>
2025-07-02 23:34:13 +08:00
wonfen
628de70e89 chore: remove unused imports 2025-06-23 00:09:17 +08:00
Tunglies
1e3566ed7d
fix: optimize asynchronous handling to prevent UI blocking in various components
fix: add missing showNotice error handling and improve async UI feedback

- Add showNotice error notifications to unlock page async error branches
- Restore showNotice for YAML serialization errors in rules/groups/proxies editor
- Ensure all user-facing async errors are surfaced via showNotice
- Add fade-in animation to layout for smoother theme transition and reduce white screen
- Use requestIdleCallback/setTimeout for heavy UI state updates to avoid UI blocking
- Minor: remove window.showNotice usage, use direct import instead
2025-05-30 17:53:40 +08:00
wonfen
7d7c8988d7 fix: resolve rendering issue caused by duplicate node names 2025-04-02 12:48:17 +08:00
wonfen
5a0eb56f70 feat: add AppDataProvider for centralized app data management and optimized refresh logic 2025-03-26 13:26:32 +08:00
wonfen
2b534e0d51 refactor: Optimize proxy rendering and layout calculation 2025-02-20 14:21:55 +08:00
wonfen
31ddccd3e1 perf: Improve proxy list interaction and UI responsiveness 2025-02-18 05:54:16 +08:00
wonfen
6763537f22 refactor: Improve proxy data update mechanism with optimistic UI and error handling 2025-02-18 00:37:55 +08:00
wonfen
31bc644763 feat: Enhance proxy groups with Initials navigation and performance optimizations 2025-02-17 16:07:46 +08:00
wonfen
67ae10b593 perf: optimize node latency refresh rate for faster updates 2025-02-06 07:58:42 +08:00
huzibaca
922020c57a Merge branch 'fix-migrate-tauri2-errors'
* fix-migrate-tauri2-errors: (288 commits)

# Conflicts:
#	.github/ISSUE_TEMPLATE/bug_report.yml
2024-11-24 00:14:46 +08:00
huzibaca
26b8cf6d52 fix: proxy view display error 2024-10-24 06:54:27 +08:00
wonfen
ac1fa7209c update clashmeta core, improve UI, merge PR, reset icons, fix CI 2023-11-28 07:49:44 +08:00
GyDi
ef0de04a0f fix: change default column to auto 2022-12-14 16:56:33 +08:00
GyDi
90c9b87f8d feat: auto proxy layout column 2022-12-14 15:07:51 +08:00
GyDi
208b96c092 feat: support to change proxy layout column 2022-12-13 17:34:39 +08:00
GyDi
f00d726347 fix: show global when no rule groups 2022-11-21 23:06:32 +08:00
GyDi
346c964419 fix: group proxies render list is null 2022-11-21 22:10:24 +08:00
GyDi
dace993c21 refactor: adjust base components export 2022-11-20 22:03:55 +08:00
GyDi
9dd3b8fd68 feat: optimize proxy page ui 2022-11-20 19:46:16 +08:00