commit c2dcd86722
refactor: profile switch (#5197)
* refactor: proxy refresh

* fix(proxy-store): properly hydrate and filter backend provider snapshots

* fix(proxy-store): add monotonic fetch guard and event bridge cleanup

* fix(proxy-store): tweak fetch sequencing guard to prevent snapshot invalidation from wiping fast responses

* docs: UPDATELOG.md

* fix(proxy-snapshot, proxy-groups): restore last-selected proxy and group info

* fix(proxy): merge static and provider entries in snapshot; fix Virtuoso viewport height

* fix(proxy-groups): restrict reduced-height viewport to chain-mode column

* refactor(profiles): introduce a state machine

* refactor: replace state machine with reducer

* refactor: introduce a profile switch worker

* refactor: hook up a backend-driven profile switch flow

* refactor(profile-switch): serialize switches with async queue and enrich frontend events

* feat(profiles): centralize profile switching with reducer/driver queue to fix stuck UI on rapid toggles

* chore: translate comments and log messages to English to avoid encoding issues

* refactor: migrate backend queue to SwitchDriver actor

* fix(profile): unify error string types in validation helper

* refactor(profile): make switch driver fully async and handle panics safely

* refactor(cmd): move switch-validation helper into new profile_switch module

* refactor(profile): modularize switch logic into profile_switch.rs

* refactor(profile_switch): modularize switch handler

- Break monolithic switch handler into proper module hierarchy
- Move shared globals, constants, and SwitchScope guard to state.rs
- Isolate queue orchestration and async task spawning in driver.rs
- Consolidate switch pipeline and config patching in workflow.rs
- Extract request pre-checks/YAML validation into validation.rs
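
For orientation, the resulting `mod.rs` plausibly looks like this (a sketch; only the file roles come from the list above, the re-exports are assumptions):

```rust
// Hypothetical profile_switch/mod.rs implied by the split above.
mod driver;      // queue orchestration and async task spawning
mod state;       // shared globals, constants, and the SwitchScope guard
mod validation;  // request pre-checks and YAML validation
mod workflow;    // switch pipeline and config patching

pub use driver::*;
pub use state::SwitchScope;
```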

* refactor(profile_switch): centralize state management and add cancellation flow

- Introduced SwitchManager in state.rs to unify mutex, sequencing, and SwitchScope handling.
- Added SwitchCancellation and SwitchRequest wrappers to encapsulate cancel tokens and notifications.
- Updated driver to allocate task IDs via SwitchManager, cancel old tokens, and queue next jobs in order.
- Updated workflow to check cancellation and sequence at each phase, replacing global flags with manager APIs.
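
A minimal sketch of the SwitchManager shape described above, assuming `tokio` and `tokio-util`; the real field names and locking granularity may differ:

```rust
// Assumed shape of SwitchManager, not the real implementation.
use std::sync::atomic::{AtomicU64, Ordering};
use tokio::sync::Mutex;
use tokio_util::sync::CancellationToken;

#[derive(Default)]
pub struct SwitchManager {
    sequence: AtomicU64,                              // monotonically increasing task ids
    current_cancel: Mutex<Option<CancellationToken>>, // token of the in-flight switch
}

impl SwitchManager {
    /// Allocate the next task id and cancel whatever was in flight.
    pub async fn begin_switch(&self) -> (u64, CancellationToken) {
        let id = self.sequence.fetch_add(1, Ordering::SeqCst) + 1;
        let token = CancellationToken::new();
        if let Some(old) = self.current_cancel.lock().await.replace(token.clone()) {
            old.cancel(); // supersede the previous request
        }
        (id, token)
    }

    /// A phase is stale if a newer task id has been handed out since.
    pub fn is_stale(&self, id: u64) -> bool {
        self.sequence.load(Ordering::SeqCst) != id
    }
}
```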

* feat(profile_switch): integrate explicit state machine for profile switching

- workflow.rs:24 now delegates each switch to SwitchStateMachine, passing an owned SwitchRequest.
  Queue cancellation and state-sequence checks are centralized inside the machine instead of scattered guards.
- workflow.rs:176 replaces the old helper with `SwitchStateMachine::new(manager(), None, profiles).run().await`,
  ensuring manual profile patches follow the same workflow (locking, validation, rollback) as queued switches.
- workflow.rs:180 & 275 expose `validate_profile_yaml` and `restore_previous_profile` for reuse inside the state machine.

- workflow/state_machine.rs:1 introduces a dedicated state machine module.
  It manages global mutex acquisition, request/cancellation state, YAML validation, draft patching,
  `CoreManager::update_config`, failure rollback, and tray/notification side-effects.
  Transitions check for cancellations and stale sequences; completions release guards via `SwitchScope` drop.
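
A compressed illustration of such a state machine; the stage names and transitions are assumptions distilled from the description, not the actual implementation:

```rust
// Illustrative explicit state machine for a profile switch.
use tokio_util::sync::CancellationToken;

enum State { Validate, Patch, Apply, Rollback, Done(bool) }

struct SwitchStateMachine {
    state: State,
    cancel: CancellationToken,
}

impl SwitchStateMachine {
    fn new(cancel: CancellationToken) -> Self {
        Self { state: State::Validate, cancel }
    }

    async fn run(mut self) -> bool {
        loop {
            // Every transition first checks for cancellation
            // (stale-sequence checks would live here too).
            if self.cancel.is_cancelled() {
                return false;
            }
            self.state = match self.state {
                State::Validate => State::Patch,       // validate_profile_yaml(...)
                State::Patch => State::Apply,          // patch the draft config
                State::Apply => State::Done(true),     // CoreManager::update_config
                State::Rollback => State::Done(false), // restore_previous_profile(...)
                State::Done(ok) => return ok,          // SwitchScope guard drops here
            };
            // On any stage error the machine would transition to Rollback (elided).
        }
    }
}
```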

* refactor(profile-switch): integrate stage-aware panic handling

- src-tauri/src/cmd/profile_switch/workflow/state_machine.rs:1
  Defines SwitchStage and SwitchPanicInfo as crate-visible, wraps each transition in with_stage(...) with catch_unwind, and propagates CmdResult<bool> to distinguish validation failures from panics while keeping cancellation semantics.

- src-tauri/src/cmd/profile_switch/workflow.rs:25
  Updates run_switch_job to return Result<bool, SwitchPanicInfo>, routing timeout, validation, config, and stage panic cases separately. Reuses SwitchPanicInfo for logging/UI notifications; patch_profiles_config maps state-machine panics into user-facing error strings.

- src-tauri/src/cmd/profile_switch/driver.rs:1
  Adds SwitchJobOutcome to unify workflow results: normal completions carry bool, and panics propagate SwitchPanicInfo. The driver loop now logs panics explicitly and uses AssertUnwindSafe(...).catch_unwind() to guard setup-phase panics.
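
A hedged sketch of the `with_stage` + `catch_unwind` idea from the first bullet, using the `futures` crate; the `SwitchStage`/`SwitchPanicInfo` fields are assumptions:

```rust
// Stage-aware panic capture: turn a panic inside one stage into a value.
use futures::FutureExt; // provides .catch_unwind() on futures
use std::panic::AssertUnwindSafe;

#[derive(Debug, Clone, Copy)]
pub enum SwitchStage { Validation, Apply, Persist }

#[derive(Debug)]
pub struct SwitchPanicInfo {
    pub stage: SwitchStage,
    pub message: String,
}

pub async fn with_stage<F, T>(stage: SwitchStage, fut: F) -> Result<T, SwitchPanicInfo>
where
    F: std::future::Future<Output = T>,
{
    AssertUnwindSafe(fut).catch_unwind().await.map_err(|payload| {
        // Recover a readable message from the panic payload.
        let message = payload
            .downcast_ref::<&str>()
            .map(|s| s.to_string())
            .or_else(|| payload.downcast_ref::<String>().cloned())
            .unwrap_or_else(|| "unknown panic".into());
        SwitchPanicInfo { stage, message }
    })
}
```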

* refactor(profile-switch): add watchdog, heartbeat, and async timeout guards

- Introduce SwitchHeartbeat for stage tracking and timing; log stage transitions with elapsed durations.
- Add watchdog in driver to cancel stalled switches (5s heartbeat timeout).
- Wrap blocking ops (Config::apply, tray updates, profiles_save_file_safe, etc.) with time::timeout to prevent async stalls.
- Improve logs for stage transitions and watchdog timeouts to clarify cancellation points.
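
The timeout wrapping in the third bullet amounts to this pattern (illustrative only; the inner block stands in for `Config::apply`, tray updates, `profiles_save_file_safe`, and similar calls):

```rust
// Bound a potentially slow operation with tokio's timeout;
// the 5s budget mirrors the watchdog interval above.
use std::time::Duration;
use tokio::time;

async fn apply_with_timeout() -> anyhow::Result<()> {
    match time::timeout(Duration::from_secs(5), async {
        // stand-in for Config::apply / tray updates / profiles_save_file_safe
        Ok::<(), anyhow::Error>(())
    })
    .await
    {
        Ok(result) => result,
        Err(_) => anyhow::bail!("stage exceeded 5s; watchdog treats it as stalled"),
    }
}
```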

* refactor(profile-switch): async post-switch tasks, early lock release, and spawn_blocking for IO

* feat(profile-switch): track cleanup and coordinate pipeline

- Add explicit cleanup tracking in the driver (`cleanup_profiles` map + `CleanupDone` messages) to know when background post-switch work is still running before starting a new workflow. (driver.rs:29-50)
- Update `handle_enqueue` to detect “cleanup in progress”: same-profile retries are short-circuited; other requests collapse the pending queue, cancelling old tokens so only the latest intent survives. (driver.rs:176-247)
- Rework scheduling helpers: `start_next_job` refuses to start while cleanup is outstanding; discarded requests release cancellation tokens; cleanup completion explicitly restarts the pipeline. (driver.rs:258-442)

* feat(profile-switch): unify post-switch cleanup handling

- workflow.rs (25-427) returns `SwitchWorkflowResult` (success + CleanupHandle) or `SwitchWorkflowError`.
  All failure/timeout paths stash post-switch work into a single CleanupHandle.
  Cleanup helpers (`notify_profile_switch_finished` and `close_connections_after_switch`) run inside that task for proper lifetime handling.

- driver.rs (29-439) propagates CleanupHandle through `SwitchJobOutcome`, spawns a bridge to wait for completion, and blocks `start_next_job` until done.
  Direct driver-side panics now schedule failure cleanup via the shared helper.
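
A sketch of what a `CleanupHandle` like this could look like; the two helper calls are the ones named above, shown as comments since their signatures aren't part of this log:

```rust
// Assumed shape of CleanupHandle: one JoinHandle owning all post-switch
// work, awaited by the driver before the next job starts.
use tokio::task::JoinHandle;

pub struct CleanupHandle(JoinHandle<()>);

impl CleanupHandle {
    pub fn spawn(success: bool) -> Self {
        Self(tokio::spawn(async move {
            // notify_profile_switch_finished(success).await;
            // close_connections_after_switch().await;
            let _ = success;
        }))
    }

    /// start_next_job blocks on this before scheduling more work.
    pub async fn wait(self) {
        let _ = self.0.await;
    }
}
```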

* tmp

* Revert "tmp"

This reverts commit e582cf4a652231a67a7c951802cb19b385f6afd7.

* refactor: queue frontend events through async dispatcher

* refactor: queue frontend switch/proxy events and throttle notices

* chore: frontend debug log

* fix: re-enable only ProfileSwitchFinished events - keep others suppressed for crash isolation

- Re-enabled only ProfileSwitchFinished events; RefreshClash, RefreshProxy, and ProfileChanged remain suppressed (they log suppression messages)
- Allows frontend to receive task completion notifications for UI feedback while crash isolation continues
- src-tauri/src/core/handle.rs now only suppresses notify_profile_changed
- Serialized emitter, frontend logging bridge, and other diagnostics unchanged

* refactor: refreshClashData

* refactor(proxy): stabilize proxy switch pipeline and rendering

- Add coalescing buffer in notification.rs to emit only the latest proxies-updated snapshot (see the sketch after this list)
- Replace nextTick with queueMicrotask in asyncQueue.ts for same-frame hydration
- Hide auto-generated GLOBAL snapshot and preserve optional metadata in proxy-snapshot.ts
- Introduce stable proxy rendering state in AppDataProvider (proxyTargetProfileId, proxyDisplayProfileId, isProxyRefreshPending)
- Update proxy page to fade content during refresh and overlay status banner instead of showing incomplete snapshot
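
The coalescing buffer in the first bullet follows a last-write-wins pattern; here is a runnable analogy using a `tokio::sync::watch` channel (an illustration, not the actual notification.rs code):

```rust
// A watch channel overwrites pending values, so the consumer
// only ever sees the newest snapshot.
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = watch::channel(String::new());

    let consumer = tokio::spawn(async move {
        while rx.changed().await.is_ok() {
            // Intermediate snapshots that arrived meanwhile are skipped.
            let snapshot = rx.borrow_and_update().clone();
            println!("emit latest snapshot: {snapshot}");
        }
    });

    for i in 1..=3 {
        let _ = tx.send(format!("proxies snapshot #{i}")); // overwrites, never queues
    }
    drop(tx);
    let _ = consumer.await;
}
```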

* refactor(profiles): move manual activating logic to reducer for deterministic queue tracking

* refactor: replace proxy-data event bridge with pure polling and simplify proxy store

- Replaced the proxy-data event bridge with pure polling: AppDataProvider now fetches the initial snapshot and drives refreshes from the polled switchStatus, removing verge://refresh-* listeners (src/providers/app-data-provider.tsx).
- Simplified proxy-store by dropping the proxies-updated listener queue and unused payload/normalizer helpers; relies on SWR/provider fetch path + calcuProxies for live updates (src/stores/proxy-store.ts).
- Trimmed layout-level event wiring to keep only notice/show/hide subscriptions, removing obsolete refresh listeners (src/pages/_layout/useLayoutEvents.ts).

* refactor(proxy): streamline proxies-updated handling and store event flow

- AppDataProvider now treats `proxies-updated` as the fast path: the listener
  calls `applyLiveProxyPayload` immediately and schedules only a single fallback
  `fetchLiveProxies` ~600 ms later (replacing the old 0/250/1000/2000 cascade).
  Expensive provider/rule refreshes run in parallel via `Promise.allSettled`, and
  the multi-stage queue on profile updates completion was removed
  (src/providers/app-data-provider.tsx).

- Rebuilt proxy-store to support the event flow: restored `setLive`, provider
  normalization, and an animation-frame + async queue that applies payloads without
  blocking. Exposed `applyLiveProxyPayload` so providers can push events directly
  into the store (src/stores/proxy-store.ts).

* refactor: switch delay

* refactor(app-data-provider): trigger getProfileSwitchStatus revalidation on profile-switch-finished

- AppDataProvider now listens to `profile-switch-finished` and calls `mutate("getProfileSwitchStatus")` to immediately update state and unlock buttons (src/providers/app-data-provider.tsx).
- Retain existing detailed timing logs for monitoring other stages.
- Frontend success notifications remain instant; background refreshes continue asynchronously.

* fix(profiles): prevent duplicate toast on page remount

* refactor(profile-switch): make active switches preemptible and prevent queue piling

- Add notify mechanism to SwitchCancellation to await cancellation without busy-waiting (state.rs:82)
- Collapse pending queue to a single entry in the driver; cancel in-flight task on newer request (driver.rs:232)
- Update handle_update_core to watch cancel token and 30s timeout; release locks, discard draft, and exit early if canceled (state_machine.rs:301)
- Providers revalidate status immediately on profile-switch-finished events (app-data-provider.tsx:208)
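
The notify mechanism in the first bullet can be sketched like this, assuming `tokio::sync::Notify`; field names are assumptions. Note the register-before-check order that avoids a lost wakeup:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use tokio::sync::Notify;

#[derive(Default)]
pub struct SwitchCancellation {
    cancelled: AtomicBool,
    notify: Notify,
}

impl SwitchCancellation {
    pub fn cancel(&self) {
        self.cancelled.store(true, Ordering::SeqCst);
        self.notify.notify_waiters(); // wake every task awaiting cancellation
    }

    /// Resolves once cancel() has been called, with no busy-waiting.
    pub async fn cancelled(&self) {
        loop {
            // Register interest *before* checking the flag to avoid a lost wakeup.
            let notified = self.notify.notified();
            if self.cancelled.load(Ordering::SeqCst) {
                return;
            }
            notified.await;
        }
    }
}
```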

* refactor(core): make core reload phase controllable, reduce 0xcfffffff risk

- CoreManager::apply_config now calls `reload_config_with_retry`, each attempt waits up to 5s, retries 3 times; on failure, returns error with duration logged and triggers core restart if needed (src-tauri/src/core/manager/config.rs:175, 205)
- `reload_config_with_retry` logs attempt info on timeout or error; if error is a Mihomo connection issue, fallback to original restart logic (src-tauri/src/core/manager/config.rs:211)
- `reload_config_once` retains original Mihomo call for retry wrapper usage (src-tauri/src/core/manager/config.rs:247)
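
Put together, `reload_config_with_retry` plausibly reduces to this shape (a hedged reconstruction; logging and error handling are assumptions):

```rust
// Up to 3 attempts, each bounded to 5s, before falling back to a restart.
use std::time::Duration;

async fn reload_config_with_retry() -> anyhow::Result<()> {
    for attempt in 1..=3 {
        match tokio::time::timeout(Duration::from_secs(5), reload_config_once()).await {
            Ok(Ok(())) => return Ok(()),
            Ok(Err(e)) => log::warn!("reload attempt {attempt} failed: {e}"),
            Err(_) => log::warn!("reload attempt {attempt} timed out after 5s"),
        }
    }
    anyhow::bail!("config reload failed after 3 attempts; falling back to core restart")
}

async fn reload_config_once() -> anyhow::Result<()> {
    Ok(()) // stand-in for the original Mihomo reload call
}
```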

* chore(frontend-logs): downgrade routine event logs from info to debug

- Logs like `emit_via_app entering spawn_blocking`, `Async emit…`, `Buffered proxies…` are now debug-level (src-tauri/src/core/notification.rs:155, :265, :309…)
- Genuine warnings/errors (failures/timeouts) remain at warn/error
- Core stage logs remain info to keep backend tracking visible

* refactor(frontend-emit): make emit_via_app fire-and-forget async

- `emit_via_app` now a regular function; spawns with `tokio::spawn` and logs a warn if `emit_to` fails, caller returns immediately (src-tauri/src/core/notification.rs:269)
- Removed `.await` at Async emit and flush_proxies calls; only record dispatch duration and warn on failure (src-tauri/src/core/notification.rs:211, :329)
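
The fire-and-forget shape described here: spawn the send, log failures, return at once. `emit` below is a stand-in, since the exact Tauri `emit_to` signature isn't part of this log:

```rust
// Caller returns immediately; delivery happens on a spawned task.
fn emit_via_app(event: &'static str, payload: String) {
    tokio::spawn(async move {
        let started = std::time::Instant::now();
        if let Err(e) = emit(event, payload).await {
            log::warn!("emit '{event}' failed after {:?}: {e}", started.elapsed());
        }
    });
}

async fn emit(_event: &str, _payload: String) -> anyhow::Result<()> {
    Ok(()) // stand-in for app_handle.emit_to(...)
}
```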

* refactor(ui): restructure profile switch for event-driven speed + polling stability

- Backend
  - SwitchManager maintains a lightweight event queue: added `event_sequence`, `recent_events`, and `SwitchResultEvent`; provides `push_event` / `events_after` (state.rs; sketched after this list)
  - `handle_completion` pushes events on success/failure and keeps `last_result` (driver.rs) for frontend incremental fetch
  - New Tauri command `get_profile_switch_events(after_sequence)` exposes `events_after` (profile_switch/mod.rs → profile.rs → lib.rs)
- Notification system
  - `NotificationSystem::process_event` only logs debug, disables WebView `emit_to`, fixes 0xcfffffff
  - Related emit/buffer functions now safe no-op, removed unused structures and warnings (notification.rs)
- Frontend
  - services/cmds.ts defines `SwitchResultEvent` and `getProfileSwitchEvents`
  - `AppDataProvider` holds `switchEventSeqRef`, polls incremental events every 0.25s (busy) / 1s (idle); each event triggers:
      - immediate `globalMutate("getProfiles")` to refresh current profile
      - background refresh of proxies/providers/rules via `Promise.allSettled` (failures logged, non-blocking)
      - forced `mutateSwitchStatus` to correct state
  - original switchStatus effect calls `handleSwitchResult` as fallback; other toast/activation logic handled in profiles.tsx
- Commands / API cleanup
  - removed `pub use profile_switch::*;` in cmd::mod.rs to avoid conflicts; frontend uses new command polling
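
A minimal sketch of the incremental event queue from the first backend bullet; the capacity and field types are assumptions:

```rust
use std::collections::VecDeque;

#[derive(Clone)]
pub struct SwitchResultEvent {
    pub sequence: u64,
    pub profile_id: String,
    pub success: bool,
}

#[derive(Default)]
pub struct EventQueue {
    event_sequence: u64,
    recent_events: VecDeque<SwitchResultEvent>,
}

impl EventQueue {
    pub fn push_event(&mut self, mut event: SwitchResultEvent) {
        self.event_sequence += 1;
        event.sequence = self.event_sequence;
        if self.recent_events.len() >= 64 {
            self.recent_events.pop_front(); // keep the queue bounded
        }
        self.recent_events.push_back(event);
    }

    /// Everything newer than what the frontend has already seen.
    pub fn events_after(&self, after_sequence: u64) -> Vec<SwitchResultEvent> {
        self.recent_events
            .iter()
            .filter(|e| e.sequence > after_sequence)
            .cloned()
            .collect()
    }
}
```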

* refactor(frontend): optimize profile switch with optimistic updates

* refactor(profile-switch): switch to event-driven flow with Profile Store

- SwitchManager pushes events; frontend polls get_profile_switch_events
- Zustand store handles optimistic profiles; AppDataProvider applies updates and background-fetches
- UI flicker removed

* fix(app-data): re-hook profile store updates during switch hydration

* fix(notification): restore frontend event dispatch and non-blocking emits

* fix(app-data-provider): restore proxy refresh and seed snapshot after refactor

* fix: ensure switch completion events are received and handle proxies-updated

* fix(app-data-provider): dedupe switch results by taskId and fix stale profile state

* fix(profile-switch): ensure patch_profiles_config_by_profile_index waits for real completion and handle join failures in apply_config_with_timeout

* docs: UPDATELOG.md

* chore: add necessary comments

* fix(core): always dispatch async proxy snapshot after RefreshClash event

* fix(proxy-store, provider): handle pending snapshots and proxy profiles

- Added pending snapshot tracking in proxy-store so `lastAppliedFetchId` no longer jumps on seed. Profile adoption is deferred until a qualifying fetch completes. Exposed `clearPendingProfile` for rollback support.
- Cleared pending snapshot state whenever live payloads apply or the store resets, preventing stale optimistic profile IDs after failures.
- In provider integration, subscribed to the pending proxy profile and fed it into target-profile derivation. Cleared it on failed switch results so hydration can advance and UI status remains accurate.

* fix(proxy): re-hook tray refresh events into proxy refresh queue

- Reattached listen("verge://refresh-proxy-config", …) at src/providers/app-data-provider.tsx:402 and registered it for cleanup.
- Added matching window fallback handler at src/providers/app-data-provider.tsx:430 so in-app dispatches share the same refresh path.

* fix(proxy-snapshot/proxy-groups): address review findings on snapshot placeholders

- src/utils/proxy-snapshot.ts:72-95 now derives snapshot group members solely from proxy-groups.proxies, so provider ids under `use` no longer generate placeholder proxy items.
- src/components/proxy/proxy-groups.tsx:665-677 lets the hydration overlay capture pointer events (and shows a wait cursor) so users can’t interact with snapshot-only placeholders before live data is ready.

* fix(profile-switch): preserve queued requests and avoid stale connection teardown

- Keep earlier queued switches intact by dropping the blanket “collapse” call: after removing duplicates for the same profile, new requests are simply appended, leaving other profiles pending (driver.rs:376). Resolves queue-loss scenario.
- Gate connection cleanup on real successes so cancelled/stale runs no longer tear down Mihomo connections; success handler now skips close_connections_after_switch when success == false (workflow.rs:419).

* fix(profile-switch, layout): improve profile validation and restore backend refresh

- Hardened profile validation using `tokio::fs` with a 5s timeout and offloading YAML parsing to `AsyncHandler::spawn_blocking`, preventing slow disks or malformed files from freezing the runtime (src-tauri/src/cmd/profile_switch/validation.rs:9, 71).
- Restored backend-triggered refresh handling by listening for `verge://refresh-clash-config` / `verge://refresh-verge-config` and invoking shared refresh services so SWR caches stay in sync with core events (src/pages/_layout/useLayoutEvents.ts:6, 45, 55).
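
The hardened validation path in the first bullet roughly follows this pattern (a sketch; `AsyncHandler::spawn_blocking` is approximated with tokio's, and the `serde_yaml` usage is an assumption):

```rust
// Bounded file read, then YAML parsing off the async runtime.
use std::time::Duration;

async fn validate_profile_yaml(path: std::path::PathBuf) -> Result<(), String> {
    let raw = tokio::time::timeout(Duration::from_secs(5), tokio::fs::read_to_string(&path))
        .await
        .map_err(|_| "profile read timed out after 5s".to_string())?
        .map_err(|e| format!("failed to read profile: {e}"))?;

    // Parse on a blocking thread so a huge/malformed file can't stall the runtime.
    tokio::task::spawn_blocking(move || {
        serde_yaml::from_str::<serde_yaml::Value>(&raw)
            .map(|_| ())
            .map_err(|e| format!("invalid YAML: {e}"))
    })
    .await
    .map_err(|e| format!("validation task failed: {e}"))?
}
```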

* feat(profile-switch): handle cancellations for superseded requests

- Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482)
- Updated the profile switch effect to log cancellations as info, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581)
- Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20)

* fix(profiles): wrap logging payload for Tauri frontend_log

* fix(profile-switch): add rollback and error propagation for failed persistence

- Added rollback on apply failure so Mihomo restores to the previous profile
  before exiting the success path early (state_machine.rs:474).
- Reworked persist_profiles_with_timeout to surface timeout/join/save errors,
  convert them into CmdResult failures, and trigger rollback + error propagation
  when persistence fails (state_machine.rs:703).

* fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks

* fix(profile-switch): preserve pending queue and surface discarded switches

* fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches

* fix(app-data-provider): restore backend-driven refresh and reattach fallbacks

* fix(profile-switch): queue concurrent updates and add bounded wait/backoff

* fix(proxy): trigger live refresh on app start for proxy snapshot

* refactor(profile-switch): split flow into layers and centralize async cleanup

- Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API.
- Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency.
- Replaced monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable.
- Removed legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.
2025-10-30 17:29:15 +08:00


#![allow(non_snake_case)]
#![recursion_limit = "512"]

mod cmd;
pub mod config;
mod constants;
mod core;
mod enhance;
mod feat;
mod module;
mod process;
pub mod utils;

#[cfg(target_os = "linux")]
use crate::utils::linux;
#[cfg(target_os = "macos")]
use crate::utils::window_manager::WindowManager;
use crate::{
    core::{EventDrivenProxyManager, handle, hotkey},
    process::AsyncHandler,
    utils::{resolve, server},
};
use config::Config;
use once_cell::sync::OnceCell;
use tauri::{AppHandle, Manager};
#[cfg(target_os = "macos")]
use tauri_plugin_autostart::MacosLauncher;
use tauri_plugin_deep_link::DeepLinkExt;
use utils::logging::Type;

pub static APP_HANDLE: OnceCell<AppHandle> = OnceCell::new();
/// Application initialization helper functions
mod app_init {
    use anyhow::Result;

    use super::*;

    /// Initialize singleton monitoring for other instances
    pub fn init_singleton_check() -> Result<()> {
        tauri::async_runtime::block_on(async move {
            logging!(info, Type::Setup, "Checking for an existing singleton instance...");
            server::check_singleton().await?;
            Ok(())
        })
    }
    /// Setup plugins for the Tauri builder
    pub fn setup_plugins(builder: tauri::Builder<tauri::Wry>) -> tauri::Builder<tauri::Wry> {
        #[allow(unused_mut)]
        let mut builder = builder
            .plugin(tauri_plugin_notification::init())
            .plugin(tauri_plugin_updater::Builder::new().build())
            .plugin(tauri_plugin_clipboard_manager::init())
            .plugin(tauri_plugin_process::init())
            .plugin(tauri_plugin_global_shortcut::Builder::new().build())
            .plugin(tauri_plugin_fs::init())
            .plugin(tauri_plugin_dialog::init())
            .plugin(tauri_plugin_shell::init())
            .plugin(tauri_plugin_deep_link::init())
            .plugin(tauri_plugin_http::init())
            .plugin(
                tauri_plugin_mihomo::Builder::new()
                    .protocol(tauri_plugin_mihomo::models::Protocol::LocalSocket)
                    .socket_path(crate::config::IClashTemp::guard_external_controller_ipc())
                    .pool_config(
                        tauri_plugin_mihomo::IpcPoolConfigBuilder::new()
                            .min_connections(0)
                            .max_connections(20)
                            .idle_timeout(std::time::Duration::from_millis(500))
                            .health_check_interval(std::time::Duration::from_secs(10))
                            .build(),
                    )
                    .build(),
            );

        // Devtools plugin only in debug mode with the tauri-dev feature,
        // to avoid registering the logger twice, since the devtools plugin also registers one
        #[cfg(all(debug_assertions, not(feature = "tokio-trace"), feature = "tauri-dev"))]
        {
            builder = builder.plugin(tauri_plugin_devtools::init());
        }

        builder
    }
    /// Setup deep link handling
    pub fn setup_deep_links(app: &tauri::App) -> Result<(), Box<dyn std::error::Error>> {
        #[cfg(any(target_os = "linux", all(debug_assertions, windows)))]
        {
            logging!(info, Type::Setup, "Registering deep links...");
            app.deep_link().register_all()?;
        }
        app.deep_link().on_open_url(|event| {
            let url = event.urls().first().map(|u| u.to_string());
            if let Some(url) = url {
                AsyncHandler::spawn(|| async {
                    if let Err(e) = resolve::resolve_scheme(url.into()).await {
                        logging!(error, Type::Setup, "Failed to resolve scheme: {}", e);
                    }
                });
            }
        });
        Ok(())
    }
    /// Setup autostart plugin
    pub fn setup_autostart(app: &tauri::App) -> Result<(), Box<dyn std::error::Error>> {
        #[cfg(target_os = "macos")]
        let mut auto_start_plugin_builder = tauri_plugin_autostart::Builder::new();
        #[cfg(not(target_os = "macos"))]
        let auto_start_plugin_builder = tauri_plugin_autostart::Builder::new();
        #[cfg(target_os = "macos")]
        {
            auto_start_plugin_builder = auto_start_plugin_builder
                .macos_launcher(MacosLauncher::LaunchAgent)
                .app_name(app.config().identifier.clone());
        }
        app.handle().plugin(auto_start_plugin_builder.build())?;
        Ok(())
    }

    /// Setup window state management
    pub fn setup_window_state(app: &tauri::App) -> Result<(), Box<dyn std::error::Error>> {
        logging!(info, Type::Setup, "Initializing window state management...");
        let window_state_plugin = tauri_plugin_window_state::Builder::new()
            .with_filename("window_state.json")
            .with_state_flags(tauri_plugin_window_state::StateFlags::default())
            .build();
        app.handle().plugin(window_state_plugin)?;
        Ok(())
    }
    pub fn generate_handlers()
    -> impl Fn(tauri::ipc::Invoke<tauri::Wry>) -> bool + Send + Sync + 'static {
        tauri::generate_handler![
            cmd::get_sys_proxy,
            cmd::get_auto_proxy,
            cmd::open_app_dir,
            cmd::open_logs_dir,
            cmd::open_web_url,
            cmd::open_core_dir,
            cmd::get_portable_flag,
            cmd::get_network_interfaces,
            cmd::get_system_hostname,
            cmd::restart_app,
            cmd::start_core,
            cmd::stop_core,
            cmd::restart_core,
            cmd::notify_ui_ready,
            cmd::update_ui_stage,
            cmd::get_running_mode,
            cmd::get_app_uptime,
            cmd::get_auto_launch_status,
            cmd::is_admin,
            cmd::entry_lightweight_mode,
            cmd::exit_lightweight_mode,
            cmd::install_service,
            cmd::uninstall_service,
            cmd::reinstall_service,
            cmd::repair_service,
            cmd::is_service_available,
            cmd::get_clash_info,
            cmd::patch_clash_config,
            cmd::patch_clash_mode,
            cmd::change_clash_core,
            cmd::get_runtime_config,
            cmd::get_runtime_yaml,
            cmd::get_runtime_exists,
            cmd::get_runtime_logs,
            cmd::get_runtime_proxy_chain_config,
            cmd::update_proxy_chain_config_in_runtime,
            cmd::invoke_uwp_tool,
            cmd::copy_clash_env,
            cmd::sync_tray_proxy_selection,
            cmd::save_dns_config,
            cmd::apply_dns_config,
            cmd::check_dns_config_exists,
            cmd::get_dns_config_content,
            cmd::validate_dns_config,
            cmd::get_clash_logs,
            cmd::get_verge_config,
            cmd::patch_verge_config,
            cmd::test_delay,
            cmd::get_app_dir,
            cmd::copy_icon_file,
            cmd::download_icon_cache,
            cmd::open_devtools,
            cmd::exit_app,
            cmd::get_network_interfaces_info,
            cmd::get_profiles,
            cmd::enhance_profiles,
            cmd::patch_profiles_config,
            cmd::switch_profile,
            cmd::view_profile,
            cmd::patch_profile,
            cmd::create_profile,
            cmd::import_profile,
            cmd::reorder_profile,
            cmd::update_profile,
            cmd::delete_profile,
            cmd::read_profile_file,
            cmd::save_profile_file,
            cmd::get_next_update_time,
            cmd::get_profile_switch_status,
            cmd::get_profile_switch_events,
            cmd::script_validate_notice,
            cmd::validate_script_file,
            cmd::create_local_backup,
            cmd::list_local_backup,
            cmd::delete_local_backup,
            cmd::restore_local_backup,
            cmd::export_local_backup,
            cmd::create_webdav_backup,
            cmd::save_webdav_config,
            cmd::list_webdav_backup,
            cmd::delete_webdav_backup,
            cmd::restore_webdav_backup,
            cmd::export_diagnostic_info,
            cmd::get_system_info,
            cmd::get_unlock_items,
            cmd::check_media_unlock,
            cmd::frontend_log,
        ]
    }
}

pub fn run() {
    if app_init::init_singleton_check().is_err() {
        return;
    }
    let _ = utils::dirs::init_portable_flag();

    #[cfg(target_os = "linux")]
    linux::configure_environment();

    let builder = app_init::setup_plugins(tauri::Builder::default())
        .setup(|app| {
            logging!(info, Type::Setup, "Starting application initialization...");
            #[allow(clippy::expect_used)]
            APP_HANDLE
                .set(app.app_handle().clone())
                .expect("failed to set global app handle");
            if let Err(e) = app_init::setup_autostart(app) {
                logging!(error, Type::Setup, "Failed to setup autostart: {}", e);
            }
            if let Err(e) = app_init::setup_deep_links(app) {
                logging!(error, Type::Setup, "Failed to setup deep links: {}", e);
            }
            if let Err(e) = app_init::setup_window_state(app) {
                logging!(error, Type::Setup, "Failed to setup window state: {}", e);
            }
            resolve::resolve_setup_handle();
            resolve::resolve_setup_async();
            resolve::resolve_setup_sync();
            logging!(info, Type::Setup, "Initialization started");
            Ok(())
        })
        .invoke_handler(app_init::generate_handlers());

    mod event_handlers {
        use super::*;
        use crate::core::handle;

        pub fn handle_ready_resumed(_app_handle: &AppHandle) {
            if handle::Handle::global().is_exiting() {
                logging!(debug, Type::System, "Application is exiting; skipping handling");
                return;
            }
            logging!(info, Type::System, "Application ready");
            handle::Handle::global().init();
            #[cfg(target_os = "macos")]
            if let Some(window) = _app_handle.get_webview_window("main") {
                let _ = window.set_title("Clash Verge");
            }
        }

        #[cfg(target_os = "macos")]
        pub async fn handle_reopen(has_visible_windows: bool) {
            handle::Handle::global().init();
            if !has_visible_windows {
                handle::Handle::global().set_activation_policy_regular();
                let _ = WindowManager::show_main_window().await;
            }
        }

        pub fn handle_window_close(api: &tauri::WindowEvent) {
            #[cfg(target_os = "macos")]
            handle::Handle::global().set_activation_policy_accessory();
            if core::handle::Handle::global().is_exiting() {
                return;
            }
            if let tauri::WindowEvent::CloseRequested { api, .. } = api {
                api.prevent_close();
                if let Some(window) = core::handle::Handle::get_window() {
                    let _ = window.hide();
                }
            }
        }

        pub fn handle_window_focus(focused: bool) {
            AsyncHandler::spawn(move || async move {
                let is_enable_global_hotkey = Config::verge()
                    .await
                    .latest_ref()
                    .enable_global_hotkey
                    .unwrap_or(true);
                if focused {
                    #[cfg(target_os = "macos")]
                    {
                        use crate::core::hotkey::SystemHotkey;
                        let _ = hotkey::Hotkey::global()
                            .register_system_hotkey(SystemHotkey::CmdQ)
                            .await;
                        let _ = hotkey::Hotkey::global()
                            .register_system_hotkey(SystemHotkey::CmdW)
                            .await;
                    }
                    if !is_enable_global_hotkey {
                        let _ = hotkey::Hotkey::global().init().await;
                    }
                    return;
                }
                #[cfg(target_os = "macos")]
                {
                    use crate::core::hotkey::SystemHotkey;
                    let _ = hotkey::Hotkey::global().unregister_system_hotkey(SystemHotkey::CmdQ);
                    let _ = hotkey::Hotkey::global().unregister_system_hotkey(SystemHotkey::CmdW);
                }
                if !is_enable_global_hotkey {
                    let _ = hotkey::Hotkey::global().reset();
                }
            });
        }

        pub fn handle_window_destroyed() {
            #[cfg(target_os = "macos")]
            {
                use crate::core::hotkey::SystemHotkey;
                let _ = hotkey::Hotkey::global().unregister_system_hotkey(SystemHotkey::CmdQ);
                let _ = hotkey::Hotkey::global().unregister_system_hotkey(SystemHotkey::CmdW);
            }
        }
    }

    std::panic::set_hook(Box::new(|info| {
        let payload = info
            .payload()
            .downcast_ref::<&'static str>()
            .map(|s| (*s).to_string())
            .or_else(|| info.payload().downcast_ref::<String>().cloned())
            .unwrap_or_else(|| "Unknown panic".to_string());
        let location = info
            .location()
            .map(|loc| format!("{}:{}", loc.file(), loc.line()))
            .unwrap_or_else(|| "unknown location".to_string());
        logging!(
            error,
            Type::System,
            "Rust panic captured: {} @ {}",
            payload,
            location
        );
        handle::Handle::notify_rust_panic(payload.into(), location.into());
    }));

#[cfg(feature = "clippy")]
let context = tauri::test::mock_context(tauri::test::noop_assets());
#[cfg(feature = "clippy")]
let app = builder.build(context).unwrap_or_else(|e| {
logging!(
error,
Type::Setup,
"Failed to build Tauri application: {}",
e
);
std::process::exit(1);
});
#[cfg(not(feature = "clippy"))]
let app = builder
.build(tauri::generate_context!())
.unwrap_or_else(|e| {
logging!(
error,
Type::Setup,
"Failed to build Tauri application: {}",
e
);
std::process::exit(1);
});
    app.run(|app_handle, e| match e {
        tauri::RunEvent::Ready | tauri::RunEvent::Resumed => {
            if core::handle::Handle::global().is_exiting() {
                return;
            }
            event_handlers::handle_ready_resumed(app_handle);
        }
        #[cfg(target_os = "macos")]
        tauri::RunEvent::Reopen {
            has_visible_windows,
            ..
        } => {
            if core::handle::Handle::global().is_exiting() {
                return;
            }
            AsyncHandler::spawn(move || async move {
                event_handlers::handle_reopen(has_visible_windows).await;
            });
        }
        tauri::RunEvent::ExitRequested { api, code, .. } => {
            tauri::async_runtime::block_on(async {
                let _ = handle::Handle::mihomo()
                    .await
                    .clear_all_ws_connections()
                    .await;
            });
            if core::handle::Handle::global().is_exiting() {
                return;
            }
            if code.is_none() {
                api.prevent_exit();
            }
        }
        tauri::RunEvent::Exit => {
            let handle = core::handle::Handle::global();
            if !handle.is_exiting() {
                handle.set_is_exiting();
                EventDrivenProxyManager::global().notify_app_stopping();
                feat::clean();
            }
        }
        tauri::RunEvent::WindowEvent { label, event, .. } if label == "main" => match event {
            tauri::WindowEvent::CloseRequested { .. } => {
                event_handlers::handle_window_close(&event);
            }
            tauri::WindowEvent::Focused(focused) => {
                event_handlers::handle_window_focus(focused);
            }
            tauri::WindowEvent::Destroyed => {
                event_handlers::handle_window_destroyed();
            }
            _ => {}
        },
        _ => {}
    });
}