INFRAIMPACT — Supabase Migration Plan (MVP 2)
Migration from local-first IndexedDB (MVP 1a) to Supabase Postgres + Auth + RLS (MVP 2). Target: the dashboard stays at https://rezaprama.github.io/infraimpact/, while the database and login move to Supabase.
1. When migration is required
Migration from local-first to Supabase becomes MANDATORY once at least one of the following holds:
- You start collecting real data (not simulated).
- More than one enumerator/researcher enters data.
- Your supervisor needs to review records without visiting your laptop.
- Sensitive data (transcripts, precise coordinates, operator names) enters the system.
- You need a server-side audit trail (one that survives a cleared browser).
- The Q1 publication context requires a Zenodo DOI with a replicable dataset.
As long as you are only demoing with MVP 1a simulations, Supabase is not needed.
2. Cost
- Supabase Free tier: 500 MB database, 1 GB file storage, 50k MAU. Sufficient for a single thesis.
- Pro tier ($25/mo): only if enumerators > 5 or storage > 500 MB. Unlikely to be needed.
You can pause the project and resume at any time.
3. Initial setup (one-time, ~30 minutes)
- Sign up at https://supabase.com → New project → name: infraimpact-research.
- Region: Singapore (best latency for Indonesia).
- Record in a safe place:
  - Project URL: https://xxx.supabase.co
  - anon key (safe to keep in a public repo)
  - service_role key (SECRET: never publish)
- Database → SQL Editor → run the schema migration script from §5.
4. Frontend changes
Add to admin.html (or a new assets/js/admin/supabase.js):
<script src="https://cdn.jsdelivr.net/npm/@supabase/supabase-js@2"></script>
<script>
  const SUPABASE_URL = 'https://YOUR-PROJECT.supabase.co';
  const SUPABASE_ANON = 'YOUR-ANON-KEY';
  const supa = supabase.createClient(SUPABASE_URL, SUPABASE_ANON);
</script>
Then give every module that previously called db.put/getAll a flag STORAGE_MODE = 'supabase' | 'local'. When 'supabase', use supa.from('table').select()/insert()/update().
Adapter pattern in assets/js/admin/db-adapter.js (MVP 2):
async function put(table, record) {
  if (MODE === 'supabase') {
    // .select() makes upsert return the written row
    // (supabase-js v2 returns data: null without it)
    const { data, error } = await supa.from(table).upsert(record).select();
    if (error) throw error;
    return data;
  }
  return localDb.put(table, record);
}
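The read path can mirror the same pattern. A minimal sketch: the `localDb` stub below stands in for the real IndexedDB wrapper so the example is self-contained, and `supa` is assumed to be the client from §4.

```javascript
const MODE = 'local'; // flip to 'supabase' in MVP 2

// Stub standing in for the real IndexedDB wrapper (illustration only).
const localDb = {
  getAll: async (table) => (table === 'assets' ? [{ asset_id: 'A-001' }] : []),
};

async function getAll(table) {
  if (MODE === 'supabase') {
    const { data, error } = await supa.from(table).select('*');
    if (error) throw error;
    return data;
  }
  return localDb.getAll(table);
}
```

Keeping both halves behind one function means the dashboard modules never branch on STORAGE_MODE themselves.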
5. SQL Schema (Supabase Postgres)
-- ============================================================
-- INFRAIMPACT Schema v1.0 — Supabase Postgres
-- ============================================================
create extension if not exists "uuid-ossp";
-- ===== user_profiles =====
create table user_profiles (
  user_id uuid references auth.users primary key,
  role text not null check (role in ('public_viewer','enumerator','researcher','supervisor','admin')) default 'public_viewer',
  full_name text,
  affiliation text,
  created_at timestamptz default now()
);
-- ===== assets =====
create table assets (
  asset_id text primary key,
  asset_name text not null,
  asset_type text not null check (asset_type in ('IPLT','IPAL_komunal','IPAL_kawasan','TPA','TPST','SPALDT')),
  province_code text not null,
  province_name text,
  district_code text not null,
  district_name text,
  latitude double precision,
  longitude double precision,
  construction_year int,
  handover_year int,
  funding_source text check (funding_source in ('APBN','APBD_I','APBD_II','Hibah','PHLN','Mixed')),
  capex_value_idr bigint,
  design_capacity_m3_day double precision,
  operator_entity text,
  data_quality_flag text default 'placeholder',
  data_source text,
  notes text,
  created_by uuid references auth.users,
  created_at timestamptz default now(),
  updated_at timestamptz default now()
);
create index idx_assets_prov on assets(province_code);
create index idx_assets_dist on assets(district_code);
create index idx_assets_type on assets(asset_type);
-- ===== yearly_observations =====
create table yearly_observations (
  observation_id text primary key,
  year int not null,
  asset_id text not null references assets,
  province_code text,
  district_code text,
  design_capacity double precision,
  actual_capacity double precision,
  utilization_rate double precision,
  idle_capacity_ratio double precision,
  service_coverage double precision,
  effluent_compliance_rate double precision,
  o_and_m_budget bigint,
  cost_recovery_ratio double precision,
  institution_status text,
  regulation_status text,
  cause_score double precision,
  impact_score double precision,
  usefulness_score double precision,
  functionality_score double precision,
  sustainability_score double precision,
  operational_status text,
  risk_zone text,
  diagnostic_quadrant text,
  data_quality_flag text default 'placeholder',
  data_source text,
  scoring_method_version text,
  notes text,
  created_by uuid references auth.users,
  created_at timestamptz default now(),
  updated_at timestamptz default now()
);
create index idx_obs_asset on yearly_observations(asset_id);
create index idx_obs_year on yearly_observations(year);
-- ===== interviews (QUAL) =====
create table interviews (
  interview_id text primary key,
  interview_date date not null,
  location text,
  province_code text,
  district_code text,
  asset_id text references assets,
  interviewer text,
  respondent_code text not null, -- ANONYMIZATION ENFORCED via trigger
  respondent_role text,
  institution_type text,
  consent_status text check (consent_status in ('written_consent','verbal_consent','no_consent_recorded')),
  anonymity_level text,
  interview_mode text,
  duration_minutes int,
  transcript_summary text,
  full_transcript_optional text, -- BLOCKED via trigger if consent != written
  key_quotes jsonb,
  observed_problem text,
  cause_tags text[],
  impact_tags text[],
  evidence_strength text check (evidence_strength in ('low','medium','high')),
  linked_indicators text[],
  linked_documents text[],
  follow_up_required boolean default false,
  researcher_memo text,
  coding_status text default 'draft',
  created_by uuid references auth.users,
  created_at timestamptz default now(),
  updated_at timestamptz default now()
);
-- Anonymization guard trigger: clear transcript unless written consent is recorded
-- (matches the column comment above: transcripts require written consent, not just verbal)
create or replace function clear_transcript_no_consent() returns trigger as $$
begin
  if new.consent_status is distinct from 'written_consent' then
    new.full_transcript_optional := null;
  end if;
  return new;
end;
$$ language plpgsql;
create trigger trg_clear_transcript before insert or update on interviews
  for each row execute function clear_transcript_no_consent();
-- ===== fgds, field_observations, documents =====
-- (Parallel schemas; see /docs/RESEARCH_DATA_STUDIO_PLAN.md §4)
-- ===== bwm_* =====
create table bwm_experts (
  expert_id text primary key,
  expert_code text not null,
  affiliation_type text,
  years_experience int,
  domain_expertise text[],
  consent_status text,
  created_at timestamptz default now()
);
create table bwm_inputs (
  input_id text primary key,
  expert_id text references bwm_experts,
  block text check (block in ('CAUSE','IMPACT')),
  level text,
  best_criterion text,
  worst_criterion text,
  best_to_others jsonb,
  others_to_worst jsonb,
  created_at timestamptz default now()
);
create table bwm_weights (
  weight_id text primary key,
  scoring_method_version text,
  panel_id text,
  block text,
  indicator_weights jsonb,
  consistency_ratio double precision,
  solver_mode text,
  derived_from_input_ids text[],
  valid_from date,
  valid_to date,
  notes text,
  created_at timestamptz default now()
);
-- ===== indicators, scores =====
create table indicators (
  indicator_id text primary key,
  block text,
  dimension text,
  name_id text,
  name_en text,
  observation_field text,
  orientation text,
  unit text,
  normalization text,
  config jsonb
);
create table scores (
  score_id text primary key,
  asset_id text references assets,
  year int,
  cause_score double precision,
  impact_score double precision,
  diagnostic_quadrant text,
  risk_zone text,
  scoring_inputs jsonb,
  derived_from_observation_id text,
  derived_from_weight_id text,
  scoring_method_version text,
  computed_at timestamptz default now(),
  computed_by uuid references auth.users
);
-- ===== audit_logs =====
create table audit_logs (
  audit_id text primary key default uuid_generate_v4()::text, -- cast: column is text, function returns uuid
  timestamp timestamptz default now(),
  user_id uuid references auth.users,
  session_id text,
  action_type text not null,
  table_name text,
  record_id text,
  field_changed text,
  old_value jsonb,
  new_value jsonb,
  reason_for_change text,
  source_document text,
  method_version text,
  details jsonb
);
create index idx_audit_ts on audit_logs(timestamp);
create index idx_audit_table on audit_logs(table_name);
-- ===== settings =====
create table settings (
  key text primary key,
  value jsonb,
  updated_at timestamptz default now()
);
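The assets, yearly_observations, and interviews tables all carry an updated_at column, but nothing in the schema above refreshes it on writes. A sketch of a generic trigger to keep it honest (attach one trigger per table):

```sql
create or replace function touch_updated_at() returns trigger as $$
begin
  new.updated_at := now();
  return new;
end;
$$ language plpgsql;

create trigger trg_touch_assets before update on assets
  for each row execute function touch_updated_at();
-- repeat for yearly_observations and interviews
```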
6. Row Level Security (RLS) Policies
Enable RLS on every sensitive table. Default-deny.
alter table assets enable row level security;
alter table yearly_observations enable row level security;
alter table interviews enable row level security;
alter table fgds enable row level security;
alter table field_observations enable row level security;
alter table documents enable row level security;
alter table scores enable row level security;
alter table audit_logs enable row level security;
alter table bwm_experts enable row level security;
alter table bwm_inputs enable row level security;
alter table bwm_weights enable row level security;
-- Helper: get user role
create or replace function get_role(uid uuid) returns text as $$
select role from user_profiles where user_id = uid;
$$ language sql stable security definer;
-- assets, yearly_observations, indicators, bwm_weights: PUBLIC READ
create policy "public_read_assets" on assets for select to anon, authenticated using (true);
create policy "public_read_obs" on yearly_observations for select to anon, authenticated using (true);
create policy "public_read_indicators" on indicators for select to anon, authenticated using (true);
create policy "public_read_bwm_weights" on bwm_weights for select to anon, authenticated using (true);
-- Write: researcher/admin only (enumerators write via the interviews policies below).
-- Postgres allows only one command per policy, so use FOR ALL
-- (or one policy per command) instead of "for insert, update, delete".
create policy "researcher_write_assets" on assets for all to authenticated
  using (get_role(auth.uid()) in ('researcher','admin'))
  with check (get_role(auth.uid()) in ('researcher','admin'));
-- Interviews: enumerator can INSERT; researcher can SELECT/UPDATE; supervisor can only SELECT validated; public NEVER
create policy "interviews_enumerator_insert" on interviews for insert to authenticated
  with check (get_role(auth.uid()) in ('enumerator','researcher','admin'));
create policy "interviews_researcher_select" on interviews for select to authenticated
  using (get_role(auth.uid()) in ('researcher','admin')
    or (get_role(auth.uid()) = 'supervisor' and coding_status = 'validated')
    or (get_role(auth.uid()) = 'enumerator' and created_by = auth.uid())
  );
create policy "interviews_researcher_update" on interviews for update to authenticated
  using (get_role(auth.uid()) in ('researcher','admin'));
-- Field/FGD/Documents: same pattern as interviews
-- Audit logs: insert always allowed (via trigger), select restricted
create policy "audit_select_researcher" on audit_logs for select to authenticated
using (get_role(auth.uid()) in ('researcher','supervisor','admin'));
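Note that user_profiles itself is missing from the enable-RLS list above, even though the frontend reads user_profiles.role for the role badge (§7). A sketch of the missing piece: enable RLS and let each user read only their own row (get_role keeps working because it is security definer):

```sql
alter table user_profiles enable row level security;

create policy "own_profile_select" on user_profiles for select to authenticated
  using (user_id = auth.uid());
```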
7. Auth flow (Supabase Auth)
// Sign in (magic link, no password)
const { error } = await supa.auth.signInWithOtp({
  email: 'reza@example.com',
  options: { emailRedirectTo: 'https://rezaprama.github.io/infraimpact/admin.html' }
});
// Get session
const { data: { session } } = await supa.auth.getSession();
// Role lookup: filter by the signed-in user, otherwise .single() fails
// as soon as user_profiles holds more than one row
const { data: profile } = await supa
  .from('user_profiles')
  .select('role')
  .eq('user_id', session.user.id)
  .single();
const userRole = profile.role;
// Sign out
await supa.auth.signOut();
UI changes in admin.html:
- Login form → email-only field that sends a magic link.
- Remove the pseudo-login banner.
- Role badge read from user_profiles.role.
- Hide sidebar items by role (e.g. enumerators do not see the Settings tab).
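Hiding sidebar items per role can be a pure lookup, which keeps it testable without the DOM. The tab names below are illustrative, not the real admin.html element IDs:

```javascript
// Map each role to the sidebar tabs it may see (tab names are hypothetical).
const ROLE_TABS = {
  public_viewer: ['dashboard'],
  enumerator:    ['dashboard', 'data-entry'],
  supervisor:    ['dashboard', 'review'],
  researcher:    ['dashboard', 'data-entry', 'review', 'scoring'],
  admin:         ['dashboard', 'data-entry', 'review', 'scoring', 'settings'],
};

function visibleTabs(role) {
  // Unknown or missing role falls back to least privilege.
  return ROLE_TABS[role] ?? ROLE_TABS.public_viewer;
}
```

Remember this is cosmetic only; the RLS policies in §6 remain the real access control.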
8. Migration script (one-shot)
scripts/migrate-local-to-supabase.js: run in the browser console on admin.html (MVP 1a) after signing in to Supabase:
async function migrate() {
  const tables = ['assets','yearly_observations','interviews','fgds','field_observations','documents','bwm_experts','bwm_inputs','bwm_weights','indicators','scores'];
  for (const t of tables) {
    const localRecords = await IFI.studio.db.getAll(t);
    if (localRecords.length === 0) continue;
    console.log(`[migrate] ${t}: ${localRecords.length} records → Supabase`);
    // Batch insert with chunking
    for (let i = 0; i < localRecords.length; i += 500) {
      const chunk = localRecords.slice(i, i + 500);
      const { error } = await supa.from(t).upsert(chunk);
      if (error) { console.error(t, error); break; }
    }
  }
  console.log('[migrate] done');
}
migrate();
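After migrate() finishes, compare local vs. remote row counts per table before trusting the result. A small pure helper (hypothetical, with the counts gathered separately) to flag mismatches:

```javascript
// Given { table: count } maps for local and remote, return the tables whose
// counts differ. Tables missing on the remote side count as 0.
function countMismatches(localCounts, remoteCounts) {
  return Object.keys(localCounts).filter(
    (t) => localCounts[t] !== (remoteCounts[t] ?? 0)
  );
}
```

Remote counts can come from supabase-js head requests, e.g. `supa.from(t).select('*', { count: 'exact', head: true })`, without downloading rows.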
9. Storage limits & best practices
- Do NOT store binary files (photos, PDFs) in Postgres. Use a separate Supabase Storage bucket.
- Do NOT store raw transcripts over 10k characters: chunk them or move them to Storage.
- AUDIT trigger on write operations: log every INSERT/UPDATE/DELETE.
- Backup: Supabase backs up daily automatically (Pro tier). On Free tier, export manually on a regular schedule.
- DOI ready: once the data is final, freeze it via a release tag → connect to Zenodo.
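The audit bullet above is stated but not implemented in §5. A sketch of a generic row-level audit trigger writing into audit_logs (attach one trigger per audited table; in plpgsql row triggers OLD is null on INSERT and NEW is null on DELETE, and security definer lets the insert pass RLS, matching the "insert always allowed via trigger" note in §6):

```sql
create or replace function log_write() returns trigger as $$
begin
  insert into audit_logs (audit_id, user_id, action_type, table_name, old_value, new_value)
  values (uuid_generate_v4()::text, auth.uid(), tg_op, tg_table_name, to_jsonb(old), to_jsonb(new));
  return coalesce(new, old);
end;
$$ language plpgsql security definer;

create trigger trg_audit_assets
  after insert or update or delete on assets
  for each row execute function log_write();
-- repeat for yearly_observations, interviews, scores, ...
```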
10. Rollback plan
If Supabase runs into problems or cost issues:
- Export every table as CSV via the Supabase dashboard.
- Set STORAGE_MODE = 'local' in admin.html.
- Import the CSVs via Studio → Raw Asset Database → Import CSV.
- Studio runs local-first again.
11. Clarifying questions for MVP 2
- How many enumerators will be active?
- Does the supervisor need access (role: supervisor)?
- Is file storage needed (photo evidence)? Estimated size in GB?
- Can institutional UI email addresses be used for Supabase Auth?
- Will the uploaded Direktorat Sanitasi data be replicated to Supabase, or stay local-only?
Status: plan only. Implementation code will be prepared once you confirm the MVP 2 go-ahead.
