Mission Goal
Create a wearable/handheld “flight monitor” that logs motion events and reports them in a consistent, repeatable format.
Why This Matters
Flight testing is instrument-first: you don’t guess what happened; you measure it. A simple motion logger is the foundation of crash analysis, vibration testing, and launch-load measurement.
What Data You Collect
- Accelerometer readings (x, y, z)
- Derived magnitude (overall acceleration)
- Event markers (e.g., “shake”, “free-fall-like”, “impact”)
- Timestamp or sample index
Hardware / Software Needed
- 1 × micro:bit (v2 recommended for better performance)
- MakeCode (or MicroPython) editor
- USB cable (for data logging over serial) or a second micro:bit acting as a radio receiver
- Optional: wearable strap / lanyard
Inputs From Other Teams
- Data: Recommend a CSV schema and naming conventions.
- Command & Control: Define what “test run” means and how many runs to collect.
- Comms: If using radio, agree a message rate and channel plan (a minimal radio setup sketch follows this list).
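If radio is used, the Comms agreement can be captured directly in code. Below is a minimal MicroPython sketch of the sender-side settings; the group number, message length, power level, and send interval are placeholders for whatever your teams agree on.

```python
# Sender-side radio settings (MicroPython on micro:bit).
# group=7, length=64, power=6 and the 250 ms interval are placeholders --
# use the values your Comms team agrees on.
import radio
from microbit import sleep

radio.config(group=7,    # shared "channel plan": only same-group boards hear each other
             length=64,  # maximum bytes per message; keep payloads well under this
             power=6)    # transmit power, 0-7
radio.on()

while True:
    radio.send("hello")  # replace with your reduced sample, e.g. "mag,event"
    sleep(250)           # roughly 4 messages per second
```

The receiver side only needs the same group number and `radio.receive()` called in a loop.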
What You Must Produce (Deliverables)
- A logger that outputs structured acceleration data.
- An “event rule” (how you detect a shake/impact) written in plain English.
- At least 3 repeatable test runs with comparable results.
Step-by-Step Build
- Decide your sampling approach: every 100ms (10 Hz) is a good start.
- Read accelerometer x/y/z and compute the magnitude (sqrt(x² + y² + z²), or an approximation of it).
- Detect events:
  - Shake: magnitude above a threshold for N samples
  - Impact: sudden spike above a higher threshold
- Output each sample as CSV to serial OR send a reduced sample by radio (a logger sketch follows this list).
- Run a “standard test”: 10 seconds still, 10 seconds walking, 3 shakes, and 1 drop onto a soft surface.
- Repeat the standard test 3 times, keeping conditions similar.
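One possible shape for the logging loop, as a minimal MicroPython sketch assuming USB serial output. SAMPLE_MS, SHAKE_MG, IMPACT_MG, and SHAKE_SAMPLES are placeholder values; replace them with thresholds you can justify from your own baseline data.

```python
# Minimal logger sketch (MicroPython on micro:bit).
# The thresholds below are placeholders in milli-g -- set yours from measured noise.
from microbit import accelerometer, running_time, sleep
import math

SAMPLE_MS = 100      # 10 Hz sampling
SHAKE_MG = 1500      # sustained-movement threshold (milli-g)
IMPACT_MG = 2500     # single-spike threshold (milli-g)
SHAKE_SAMPLES = 3    # N consecutive samples above SHAKE_MG

print("t_ms,ax,ay,az,mag,event")   # CSV header over USB serial
above = 0
while True:
    t = running_time()
    ax, ay, az = accelerometer.get_values()           # readings in milli-g
    mag = int(math.sqrt(ax * ax + ay * ay + az * az))

    event = ""
    if mag > IMPACT_MG:
        event = "impact"
    elif mag > SHAKE_MG:
        above += 1
        if above >= SHAKE_SAMPLES:
            event = "shake"
    else:
        above = 0

    print("{},{},{},{},{},{}".format(t, ax, ay, az, mag, event))
    sleep(SAMPLE_MS)
```

Each print() line goes over the USB serial connection, so saving the serial monitor output gives a CSV file directly. A “free-fall-like” rule (magnitude well below the resting ~1000 milli-g) can be added in the same pattern.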
Data Format / Output
Recommended CSV columns (t_ms in milliseconds; ax, ay, az and mag in milli-g, the micro:bit accelerometer’s native unit; event left blank when nothing was detected):
t_ms,ax,ay,az,mag,event
Analysis Ideas
- Compare peak magnitudes between runs.
- Count events per run and check repeatability.
- Identify the “noise floor” while still, then set thresholds based on the measured noise (a summary script sketch follows this list).
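A quick-look summary along these lines can run in ordinary Python on a laptop. The filenames below are placeholders for your three saved logs, and the script assumes each run starts with the 10-seconds-still segment of the standard test.

```python
# Quick-look summary for saved logs (standard Python, not MicroPython).
# "run1.csv" etc. are placeholder filenames, one per standard test run.
import csv

def summarise(path, still_ms=10000):
    mags, baseline, events, t0 = [], [], 0, None
    with open(path) as f:
        for row in csv.DictReader(f):
            t = float(row["t_ms"])
            t0 = t if t0 is None else t0
            mag = float(row["mag"])
            mags.append(mag)
            if row["event"].strip():
                events += 1
            if t - t0 < still_ms:          # samples from the "still" segment
                baseline.append(mag)
    print(path,
          "peak mag:", max(mags),
          "events:", events,
          "still-segment range:", min(baseline), "to", max(baseline))

for run in ("run1.csv", "run2.csv", "run3.csv"):
    summarise(run)
```

The still-segment range is one way to express the noise floor; thresholds set a few multiples above it are much easier to justify than guessed numbers.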
Success Criteria
- Data output is consistent and parseable across runs.
- Event detection triggers at the right moments in the standard test.
- Team can justify thresholds based on observed baseline noise.
Evidence Checklist
- Short video/photo of standard test setup
- Three CSV logs (or screenshots of serial output)
- Written event rule + thresholds
- One simple chart or summary (peaks/events per run)
Safety & Privacy
- Do not drop micro:bits onto hard floors; use a soft landing zone.
- Keep the wearable secure (no swinging near faces).
- No personal identifiers in logs.
Common Failure Modes
- Sampling too fast causing lag / missed output.
- Thresholds chosen arbitrarily (not based on baseline data).
- Serial monitor settings incorrect (micro:bit USB serial runs at 115200 baud; check the connection too).
- Radio messages too large or too frequent.
Stretch Goals
- Auto-calibrate thresholds from 5 seconds of stillness (a sketch follows this list).
- Add a “test run ID” button press to mark segments.
- Compress radio messages (send mag only + event flags).
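A sketch of the auto-calibration stretch goal in MicroPython. The ×4 and ×8 margins are assumptions to be tuned against your own data, not recommended values.

```python
# Stretch-goal sketch: derive thresholds from 5 s of measured stillness
# (MicroPython on micro:bit). The 4x / 8x margins are assumptions -- tune them.
from microbit import accelerometer, display, sleep
import math

def measure_baseline(ms=5000, step=100):
    # Largest deviation of the magnitude from the ~1000 milli-g resting value
    # seen while the board is held still.
    worst = 0
    for _ in range(ms // step):
        ax, ay, az = accelerometer.get_values()
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        worst = max(worst, abs(mag - 1000))
        sleep(step)
    return worst

display.show("C")                  # "calibrating" -- keep the board still
noise = measure_baseline()
SHAKE_MG = 1000 + 4 * noise        # assumed margin above the noise floor
IMPACT_MG = 1000 + 8 * noise
display.show("R")                  # ready
print("noise:", noise, "shake:", SHAKE_MG, "impact:", IMPACT_MG)
# ...then run the normal logging loop using these thresholds.
```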
Scaffolding Example (optional)
You are allowed to reuse structures and formats from other teams — but not their decisions.
Example: “Test Pilot Monitor” data (a code sketch follows the list below)
- Measure acceleration (x/y/z) and compute “movement score”.
- Trigger an alert when score exceeds threshold (e.g., shake event).
- Log 30 seconds of readings (every 200ms) to a simple table.
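A MicroPython sketch of that scaffold. The movement score (distance of the magnitude from the ~1000 milli-g resting value) and the 500 milli-g alert threshold are illustrative choices only; your team must make and justify its own decisions.

```python
# "Test Pilot Monitor" scaffold sketch (MicroPython on micro:bit).
# The movement score and the 500 milli-g alert threshold are illustrative,
# not decisions to copy.
from microbit import accelerometer, display, running_time, sleep, Image
import math

ALERT_SCORE = 500               # placeholder alert threshold (milli-g)
log = []                        # simple in-memory table of (t_ms, score)

start = running_time()
while running_time() - start < 30000:          # log for 30 seconds
    ax, ay, az = accelerometer.get_values()
    mag = math.sqrt(ax * ax + ay * ay + az * az)
    score = abs(mag - 1000)                    # "movement score": distance from rest
    log.append((running_time() - start, int(score)))
    if score > ALERT_SCORE:
        display.show(Image.SURPRISED)          # alert trigger (e.g. shake event)
    else:
        display.clear()
    sleep(200)                                 # one reading every 200 ms

for t, score in log:                           # dump the table over serial afterwards
    print(t, score)
```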
Example evidence
- Screenshot/photo of readings + a short clip showing the alert trigger.