Documentation

Offline cache

Hive-backed cache-and-network strategy with TTLs and conflict resolution.

The admin app caches data with Hive so list screens render instantly on a cold tab and survive flaky networks. This page describes the cache-and-network strategy used in appmint_mobile/lib/services/ — a thin layer that returns cached data immediately, fires a network request in the background, and updates the UI when the response lands.

Consumer apps don't usually need this — they re-fetch on each screen open. Reach for Hive when:

  • Operators flip between long lists multiple times per minute (leads, orders, customers).
  • The user is regularly offline (field staff, drivers).
  • A screen takes more than ~150ms to render from a cold network call.

Setup

dependencies:
  hive: ^2.2.3
  hive_flutter: ^1.1.0

dev_dependencies:
  hive_generator: ^2.0.1
  build_runner: ^2.4.9

Initialize Hive once at app start:

// lib/main.dart
import 'package:hive_flutter/hive_flutter.dart';

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await Hive.initFlutter();
  await Hive.openBox<String>('cache');         // generic JSON cache
  await Hive.openBox<String>('cache_meta');    // per-key metadata (timestamps)
  // ... rest of bootstrap
}

A single string-keyed box keeps things simple — values are the JSON-encoded payload, metadata (timestamp, etag) lives in a parallel box keyed by the same path.

CacheService

// lib/services/cache_service.dart
import 'dart:convert';
import 'package:hive/hive.dart';

class CacheService {
  static final CacheService _instance = CacheService._internal();
  factory CacheService() => _instance;
  CacheService._internal();

  Box<String> get _data => Hive.box<String>('cache');
  Box<String> get _meta => Hive.box<String>('cache_meta');

  /// Read cached payload. Returns null if missing or stale.
  T? read<T>(String key, {Duration? ttl}) {
    final raw = _data.get(key);
    if (raw == null) return null;

    if (ttl != null) {
      final tsRaw = _meta.get(key);
      if (tsRaw == null) return null;
      final ts = DateTime.parse(tsRaw);
      if (DateTime.now().difference(ts) > ttl) return null;
    }

    try {
      return jsonDecode(raw) as T;
    } catch (_) {
      return null;
    }
  }

  Future<void> write(String key, dynamic value) async {
    await _data.put(key, jsonEncode(value));
    await _meta.put(key, DateTime.now().toIso8601String());
  }

  Future<void> invalidate(String key) async {
    await _data.delete(key);
    await _meta.delete(key);
  }

  Future<void> invalidatePrefix(String prefix) async {
    final keys = _data.keys.where((k) => k.toString().startsWith(prefix)).toList();
    for (final k in keys) {
      await _data.delete(k);
      await _meta.delete(k);
    }
  }

  Future<void> clear() async {
    await _data.clear();
    await _meta.clear();
  }
}

The cache is intentionally untyped — it stores the raw JSON map and lets the service layer re-deserialize. This avoids a maintenance tax of generating Hive type adapters for every model.
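If repeating the fromJson mapping at every call site gets noisy, a thin typed helper can sit beside CacheService. This is a sketch, not part of the service above; readCachedList is a hypothetical name:

```dart
/// Hypothetical helper: typed list reads on top of the untyped CacheService.
/// Values are stored as JSON arrays of maps, so the only per-model input
/// needed is the model's fromJson constructor.
List<T>? readCachedList<T>(
  String key,
  T Function(Map<String, dynamic>) fromJson, {
  Duration? ttl,
}) {
  final raw = CacheService().read<List<dynamic>>(key, ttl: ttl);
  if (raw == null) return null;
  return raw.map((j) => fromJson(j as Map<String, dynamic>)).toList();
}
```

Usage is a one-liner per service: `readCachedList('leads:list:p1:l20', LeadModel.fromJson)`.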

Cache-and-network pattern

The standard pattern returns a Stream<List<T>> that emits up to twice: immediately with cached data when a fresh entry exists, then again when the network response lands.

// lib/services/leads_service.dart
import 'dart:async';
import '../models/lead_model.dart';
import 'cache_service.dart';
import 'http_client.dart';

class LeadsService {
  final _http = appmintHttp;
  final _cache = CacheService();

  Stream<List<LeadModel>> watchList({
    int page = 1,
    int limit = 20,
    Duration ttl = const Duration(minutes: 5),
  }) async* {
    final key = 'leads:list:p$page:l$limit';

    // 1. Cached read — emit immediately if fresh.
    final cached = _cache.read<List<dynamic>>(key, ttl: ttl);
    if (cached != null) {
      yield cached.map((j) => LeadModel.fromJson(j)).toList();
    }

    // 2. Network fetch — emit again when it lands.
    try {
      final response = await _http.get('/crm/leads/detail', queryParams: {
        'page': page.toString(),
        'limit': limit.toString(),
      });
      final list = (response['data'] ?? []) as List;
      await _cache.write(key, list);
      yield list.map((j) => LeadModel.fromJson(j)).toList();
    } catch (e) {
      // If we already emitted cached data, the user sees something.
      // If not, surface the error.
      if (cached == null) rethrow;
    }
  }
}

A widget consumes it with StreamBuilder:

StreamBuilder<List<LeadModel>>(
  stream: LeadsService().watchList(),
  builder: (context, snapshot) {
    if (snapshot.hasError && !snapshot.hasData) {
      return ErrorView(snapshot.error.toString());
    }
    if (!snapshot.hasData) return const LoadingView();
    return LeadList(leads: snapshot.data!);
  },
);
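Note that calling LeadsService().watchList() directly inside build creates a new stream, and a new network request, on every rebuild. The usual fix is to hold the stream in state; a sketch, where LeadsScreen is a hypothetical widget name:

```dart
class LeadsScreen extends StatefulWidget {
  const LeadsScreen({super.key});
  @override
  State<LeadsScreen> createState() => _LeadsScreenState();
}

class _LeadsScreenState extends State<LeadsScreen> {
  // Created once per State, so rebuilds reuse the same subscription
  // instead of re-triggering the cache-and-network fetch.
  late final Stream<List<LeadModel>> _leads = LeadsService().watchList();

  @override
  Widget build(BuildContext context) {
    return StreamBuilder<List<LeadModel>>(
      stream: _leads,
      builder: (context, snapshot) {
        if (snapshot.hasError && !snapshot.hasData) {
          return ErrorView(snapshot.error.toString());
        }
        if (!snapshot.hasData) return const LoadingView();
        return LeadList(leads: snapshot.data!);
      },
    );
  }
}
```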

TTLs

Different collections want different freshness windows.

Collection       | Suggested TTL | Why
Leads, contacts  | 5 min         | Frequently updated, but staleness is rarely costly
Orders           | 1 min         | Status changes propagate quickly; users notice stale states
Products         | 1 hour        | Catalog changes are rare
User profile     | 1 day         | Almost never changes per session
Dashboard KPIs   | 30 sec        | Operators expect near-real-time

Pass ttl: Duration.zero to never serve cached data up front. Note that watchList() as written rethrows network errors when nothing fresh was cached; for true cache-only-on-failure semantics, re-read the key without a TTL inside the catch block and yield that stale copy instead.
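A network-first variant that serves stale data only when the request fails can be sketched as follows. This reuses LeadsService's _http and _cache fields; watchListNetworkFirst is a hypothetical method, not part of the service above:

```dart
Stream<List<LeadModel>> watchListNetworkFirst({
  int page = 1,
  int limit = 20,
}) async* {
  final key = 'leads:list:p$page:l$limit';
  try {
    final response = await _http.get('/crm/leads/detail', queryParams: {
      'page': page.toString(),
      'limit': limit.toString(),
    });
    final list = (response['data'] ?? []) as List;
    await _cache.write(key, list);
    yield list.map((j) => LeadModel.fromJson(j)).toList();
  } catch (_) {
    // No TTL on this read: any cached copy, however old, beats an error screen.
    final stale = _cache.read<List<dynamic>>(key);
    if (stale == null) rethrow;
    yield stale.map((j) => LeadModel.fromJson(j)).toList();
  }
}
```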

Pull-to-refresh

Refresh drops the cached entries so the next read bypasses the cache and writes the network result over the previous entry.

Future<void> refresh() async {
  await _cache.invalidatePrefix('leads:list');
  // The next watchList() call finds no cached entry and goes straight to the network.
}

Wire that into a RefreshIndicator:

RefreshIndicator(
  onRefresh: () async {
    await LeadsService().refresh();
    // Trigger a rebuild (e.g. setState) so StreamBuilder subscribes to a fresh stream.
  },
  child: const LeadsListView(),
);

Writes and invalidation

Mutations bypass the cache and invalidate the prefix that could contain them.

Future<LeadModel> create(Map<String, dynamic> data) async {
  final response = await _http.post('/crm/leads/detail', body: data);
  await _cache.invalidatePrefix('leads:list');
  return LeadModel.fromJson(response);
}

This is the simplest correctness model — invalidate widely, refetch on next read. Smarter strategies (insert into the cached list locally, optimistic updates) are easy to bolt on later when you measure what's actually slow.
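As an example of the "insert into the cached list locally" variant, a create call could prepend the server's copy of the new record to the cached first page instead of invalidating it. A sketch that assumes the key format used by watchList(); createOptimistic is a hypothetical method:

```dart
Future<LeadModel> createOptimistic(Map<String, dynamic> data) async {
  final response = await _http.post('/crm/leads/detail', body: data);
  const firstPage = 'leads:list:p1:l20';
  final cached = _cache.read<List<dynamic>>(firstPage);
  if (cached != null) {
    // Prepend the server's copy so the list updates without a refetch.
    // Deeper pages stay stale until their TTL expires.
    await _cache.write(firstPage, [response, ...cached]);
  }
  return LeadModel.fromJson(response);
}
```

The trade-off: the cached first page now holds limit + 1 items, and sort order may briefly disagree with the server until the next network emit corrects it.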

Conflict resolution on resync

When the device comes back online, it's possible the cached data has diverged from the server. AppEngine's BaseModel<T> carries a version field for exactly this case.

If your app supports offline writes (creating leads while in a tunnel and syncing later), implement last-writer-wins by default and detect conflicts via the version:

// Pseudocode for an offline write queue
final pending = await _queueBox.get('lead:$tempId');
try {
  final response = await _http.post('/crm/leads/detail', body: pending);
  await _cache.invalidatePrefix('leads:list');
  await _queueBox.delete('lead:$tempId');
} on Exception catch (e) {
  if (e.toString().contains('Conflict')) {
    // Server has a newer version. Fetch it and let the user choose.
    final remote = await _http.get('/crm/leads/detail/${pending['pk']}');
    await _resolveConflict(remote, pending);
  }
}

For most admin apps, offline writes are out of scope — operators are connected when they take action. Reads-while-offline plus retry-on-error covers 90% of cases without a queue.

What NOT to cache

  • Tokens. Hive writes unencrypted files to plain disk by default; OS-level file protection on iOS is the only safeguard. Tokens live in flutter_secure_storage.
  • Personally identifiable raw documents. If a customer requests deletion under privacy law, you must clear their cached data — see the GDPR/CCPA notes in your tenant's compliance docs. The simplest approach is to call Hive.deleteFromDisk() on logout.
  • Anything paginated by absolute index. Cache by query parameters (the p1:l20 key suffix above), not by "page 2" — page 2's contents change as records are inserted.
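One way to keep those keys consistent is to build them from the query parameters in a single place. A sketch; listKey is a hypothetical helper:

```dart
/// Builds cache keys like 'leads:list:p1:l20' (or 'leads:list:p1:l20:q:acme'
/// with a search term) so identical queries always hit the same entry.
String listKey(
  String collection, {
  required int page,
  required int limit,
  String? search,
}) {
  final parts = [collection, 'list', 'p$page', 'l$limit'];
  if (search != null && search.isNotEmpty) parts.add('q:$search');
  return parts.join(':');
}
```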

Cleaning up on logout

Always wipe the cache when a user signs out. A different user signing in on the same device should never see the previous user's data.

Future<void> signOut() async {
  await CacheService().clear();
  // For full on-disk removal (the PII case above), use Hive.deleteFromDisk() instead.
  // ... rest of sign-out
}

The next page covers push notifications, which often invalidate caches on receipt — a new lead push, for example, should clear leads:list* so the next visit pulls fresh data.