Software Viva Guide

This file is for quick preparation before the presentation and viva.

It is organized into twelve short sections.

1. One-line explanation

This project is a software-controlled robotic sorting system where a web application controls a robotic arm and conveyor, reads sensors, and runs automation/workflow logic to sort objects based on detection and color.

2. Main software objective

The software has three goals:

  1. Manual control
  2. Automation
  3. Demo / sorting logic

3. High-level architecture

There are 4 major software parts:

A. Frontend

What it does: provides the browser UI (sliders, the workflow builder, demo controls) and sends requests such as /send_command to the backend.

Important point: the browser is only the interface; it holds no authoritative state.

B. Backend

What it does: the Flask app (app.py) is the main control layer. It talks to the two Arduinos over serial, runs workflows and automations, and executes the demo/sorting logic.

Important point: app.py is the brain; decisions are made here, not in the browser or the firmware.

C. Persistent runtime state

What it does: stores the last commanded values and execution state in runtime_state.json.

Why it exists: to keep the UI and backend consistent and to preserve state across refresh/restart.

Important point: persisted pose is the last commanded software state, not a guaranteed physical pose.

D. Computer vision color detection

What it does: reads webcam frames and classifies object color using OpenCV HSV detection.

Important point: detection runs in the software layer; the Arduinos never classify color.

4. Hardware-software responsibility split

Arduino 1

Responsibilities: reads the HC-SR04 ultrasonic sensor and streams distance readings to the backend over serial.

Arduino 2

Responsibilities:

Key design choice

Arduino firmware is kept relatively simple.

The complex decision-making lives in the Python backend.

That is intentional because Python is easier to modify quickly during development.

5. How commands flow in the system

Manual control flow

  1. User moves slider in UI
  2. Frontend sends /send_command
  3. app.py decides which Arduino should receive the command
  4. Serial command is sent
  5. runtime state is updated
  6. UI reflects updated backend state
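The routing and state-update steps above can be sketched as two small helpers. The command format ("S&lt;n&gt;:&lt;angle&gt;" for servos, "M:&lt;speed&gt;" for the motor) and the routing table are illustrative assumptions, not the project's actual serial protocol:

```python
# Hypothetical split: servo commands go to Arduino 2, motor commands
# to Arduino 1 (mirrors the responsibility split described later).
ARDUINO_FOR_PREFIX = {"S": "arduino2", "M": "arduino1"}

def route_command(cmd: str) -> str:
    """Decide which Arduino should receive a serial command."""
    return ARDUINO_FOR_PREFIX.get(cmd[0], "arduino1")

def apply_to_runtime_state(state: dict, cmd: str) -> dict:
    """Mirror the last commanded value into backend-owned state,
    so the UI reflects the backend rather than its own memory."""
    key, _, value = cmd.partition(":")
    state[key] = int(value)
    return state
```

The point to stress in the viva: the UI never updates itself optimistically; it re-reads the backend-owned state after each command.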

Workflow flow

  1. User saves positions
  2. User builds workflow from steps
  3. Workflow is saved to workflows.json
  4. User runs workflow
  5. Backend executes steps sequentially

Supported workflow step types:
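The step types are not enumerated here; as a hedged sketch, assuming hypothetical "move" and "wait" step types and a workflows.json entry shaped like `{"steps": [...]}`, sequential execution could look like:

```python
import time

def run_workflow(workflow: dict, send, sleep=time.sleep) -> int:
    """Execute workflow steps in order and return how many ran.
    `send` is the serial-send callable; the "move"/"wait" step
    types are assumptions for illustration."""
    done = 0
    for step in workflow["steps"]:
        if step["type"] == "move":
            send(f"S{step['servo']}:{step['angle']}")
        elif step["type"] == "wait":
            sleep(step["ms"] / 1000)
        done += 1
    return done
```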

Automation flow

  1. Arduino 1 sends ultrasonic distance
  2. Backend reads distance
  3. If the rule condition matches:
  4. After the delay:
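The rule check in step 3 is naturally a pure function. A minimal sketch, assuming a hypothetical rule shape like `{"op": "<", "threshold": 6}`:

```python
def rule_matches(distance_cm: float, rule: dict) -> bool:
    """Return True when the sensor reading satisfies the rule.
    The rule shape is an illustrative assumption."""
    if rule["op"] == "<":
        return distance_cm < rule["threshold"]
    if rule["op"] == ">":
        return distance_cm > rule["threshold"]
    raise ValueError(f"unknown operator: {rule['op']}")
```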

Demo flow

The current hardcoded demo loop is:

  1. Start conveyor at M:128
  2. Wait until ultrasonic distance is below 6 cm
  3. Wait 1000 ms
  4. Stop motor
  5. Check camera-detected color
  6. Branch: run the red workflow, the green workflow, or the reject action based on the detected color
  7. Restart conveyor
  8. Repeat until stopped
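One pass of that loop can be sketched with the sensor, camera, and serial layer injected as callables (which also makes it testable without hardware). The command strings and workflow names here are illustrative assumptions:

```python
import time

def demo_cycle(get_distance, get_color, send, sleep=time.sleep):
    """One pass of the hardcoded demo loop (names are illustrative)."""
    send("M:128")                   # 1. start conveyor
    while get_distance() >= 6:      # 2. wait for distance below 6 cm
        sleep(0.01)
    sleep(1.0)                      # 3. wait 1000 ms
    send("M:0")                     # 4. stop motor
    color = get_color()             # 5. camera-detected color
    if color == "red":              # 6. branch: red / green / reject
        send("RUN:red_workflow")
    elif color == "green":
        send("RUN:green_workflow")
    else:
        send("REJECT")
    send("M:128")                   # 7. restart conveyor
```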

6. Important data files

positions.json: saved arm positions recorded from the UI

workflows.json: workflows built from saved steps

automations.json: automation rules (sensor condition and resulting action)

runtime_state.json: backend-owned runtime state, persisted across sessions
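A hedged sketch of how runtime_state.json can be read and written safely; the atomic-replace pattern is a suggestion for robustness, not necessarily what app.py currently does:

```python
import json, os, tempfile

def save_state(state: dict, path: str = "runtime_state.json") -> None:
    """Write state atomically: write to a temp file, then rename,
    so a crash mid-save cannot corrupt the state file."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def load_state(path: str = "runtime_state.json") -> dict:
    """Return persisted state, or an empty dict on first run."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)
```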

7. Why backend-owned state was needed

Earlier, the frontend's in-memory state acted as the source of truth.

That caused a serious robotics problem: a page refresh or restart could lose the commanded state, leaving the UI out of sync with what the robot had actually been told to do.

So the design was corrected: the backend now owns and persists runtime state, and the UI only reflects it.

This is a good viva point because it shows safety-aware design thinking.

8. Why color detection was moved away from Arduino logic

Originally, there was an attempt to classify color directly in the Arduino-side sensor logic.

That was changed because color classification is far easier to tune, debug, and iterate on in the software layer than in firmware.

So now the webcam plus OpenCV HSV detection handles color in Python, and the Arduinos handle only actuation and raw sensing.

9. Most important files to mention in viva

If asked “which files matter most?”, answer:

Core control: app.py

UI

Firmware

Vision

10. Likely viva questions and safe answers

Q. Why did you use two Arduinos?

Because the system has multiple servos, a conveyor motor, an ultrasonic sensor, an LCD, and serial coordination. Splitting responsibilities made pin usage and task separation simpler.

Q. Why is the Python backend needed?

Because the backend is the main control layer. It is easier to implement workflows, automations, state persistence, and demo branching in Python than in microcontroller firmware.

Q. Why not do everything on Arduino?

Because higher-level orchestration is easier, faster to modify, and more maintainable in Python. Arduino is used for low-level actuation and sensor interfacing.

Q. How is color detected?

Using a webcam and OpenCV-based HSV color detection in the software layer. The detected color is then used by the backend logic.
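The project does this with OpenCV on camera frames; to keep this sketch self-contained it uses the standard library's colorsys for the HSV conversion instead, with illustrative hue thresholds:

```python
import colorsys

def classify_color(r: int, g: int, b: int) -> str:
    """Classify a mean patch color by hue. The thresholds here are
    illustrative; the real pipeline thresholds HSV ranges in OpenCV."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if s < 0.3 or v < 0.2:
        return "unknown"          # too gray or too dark to trust
    deg = h * 360
    if deg < 20 or deg > 340:
        return "red"
    if 80 <= deg <= 170:
        return "green"
    return "unknown"
```

Averaging a patch of pixels before classifying (rather than classifying single pixels) makes the result much more stable under lighting changes.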

Q. How is object presence detected?

Using the HC-SR04 ultrasonic sensor connected to Arduino 1. The distance is streamed to the backend over serial.
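Serial streams need defensive parsing, since partial lines and noise are common; the "D:&lt;cm&gt;" wire format below is an assumption for illustration:

```python
def parse_distance(line: str):
    """Return the distance in cm from one serial line, or None if
    the line is noise or truncated. "D:<cm>" framing is assumed."""
    line = line.strip()
    if not line.startswith("D:"):
        return None
    try:
        return float(line[2:])
    except ValueError:
        return None
```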

Q. What happens when an object is detected in demo mode?

The conveyor is running. When distance goes below 6 cm, the backend waits 1000 ms, stops the motor, checks the camera-detected color, then runs the correct workflow or a reject motor action, and finally restarts the conveyor.

Q. Why do you store runtime state?

To make the UI and backend consistent, preserve execution state across refresh/restart, and avoid depending only on browser memory.

Q. Is persisted pose always equal to real robot pose?

No. Persisted pose is the last commanded software state. Without encoder feedback, physical pose is not guaranteed. That distinction is important for safety.

Q. What is the role of the LCD?

It gives local machine feedback directly on the robot side, without needing the browser.

Q. What is future scope?

Full automatic color-based sorting with more reliable classification, richer workflow branching, and more autonomous pick-and-place behavior.

11. What your teammate must remember

If they can remember only five points, it should be these:

  1. app.py is the brain
  2. two Arduinos split low-level hardware control
  3. ultrasonic detects object presence
  4. camera CV detects color
  5. hardcoded demo branches into red / green / reject actions

12. Short emergency answer

If someone asks suddenly, this answer is enough:

“Software-wise, the browser is only the interface. The main control logic is in the Flask backend, which talks to two Arduinos over serial, runs workflows and automations, reads ultrasonic input, gets color from OpenCV, and executes the sorting/demo logic. Runtime state is persisted so the system remains synchronized across sessions.”