

In this article, we will detect human poses using Google ML Kit in our Flutter application. Machine learning is a hot topic in Flutter and in this modern era of technology. I searched a lot on Medium, GitHub, and other sources, but I couldn't find an up-to-date guide that follows the approach Google officially recommends for this topic. That's why I'm covering Google ML Kit in my articles: to show the current, correct way of bringing the flavor of machine learning to our Flutter apps. In the previous ML article, we learned how to label an image in a Flutter application using Google ML Kit.
This time, the project structure, pattern, and procedure are the same as in the previous article (the GetX pattern), but instead of labeling images, we will use Google ML Kit to detect a human pose both in a picked image and in real time with the camera.
Now let’s get started.

Create a new Flutter project with a name of your choice (e.g., ml_kit_pose_detector).
Note: If you want a simple project without GetX, create your own structure (controllers for logic, views for screens).
I will use the GetX pattern via get_cli commands. If you are not familiar with get_cli, please check my previous article and activate the CLI first.
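If the CLI is not activated yet, it can usually be enabled with a single command (see the linked article for details):
dart pub global activate get_cli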
To shift the project to the GetX pattern, open a terminal in the project root and run the command below (it works only if you have activated get_cli as described in the article mentioned above):
get init
Now the project structure will look like this:
- The data folder contains all the data we need to store in our project
- The modules folder contains all the app pages (screens)
- The routes folder contains all the routes of the project (updated automatically on page creation)
Each module, in turn, contains controllers and views (and optionally models) to store the corresponding files.
Next, install the following packages from pub.dev:
1- image_picker
2- google_mlkit_pose_detection
3- camera
4- google_mlkit_commons
5- path_provider
dependencies:
  flutter:
    sdk: flutter
  cupertino_icons: ^1.0.2
  get: 4.6.5
  image_picker: ^0.8.5+3
  google_mlkit_commons: ^0.2.0
  google_mlkit_image_labeling: ^0.4.0
  path_provider: ^2.0.11
  google_mlkit_pose_detection: ^0.4.0
  camera: ^0.10.0+1
NOTE: Please follow the complete platform setup guide (Android, iOS, web) on pub.dev while installing each package.
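As a quick reference (a minimal sketch; always defer to each package page for current requirements): on Android, the camera and ML Kit plugins need minSdkVersion 21 in the app-level android/app/build.gradle, and on iOS, the camera and image picker require usage descriptions in ios/Runner/Info.plist, for example:

<key>NSCameraUsageDescription</key>
<string>Used for live pose detection</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Used to pick images for pose detection</string>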
HomeView is auto-generated; you can create a new page with the following command:
get create page:your_page_name

1- Home Page
On the home view, we will wire up the camera view so that each camera frame is passed to the controller for pose detection.
home_view.dart
import 'package:flutter/material.dart';
import 'package:get/get.dart';

import '../controllers/home_controller.dart';
import 'components/camera_view.dart';

class HomeView extends GetView<HomeController> {
  const HomeView({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    // Rebuilds whenever the controller calls update() after processing a frame.
    return GetBuilder<HomeController>(builder: (_) {
      return CameraView(
        title: 'Pose Detector',
        customPaint: controller.customPaint,
        text: controller.text,
        onImage: (inputImage) {
          controller.processImage(inputImage);
        },
      );
    });
  }
}
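Since HomeView extends GetView&lt;HomeController&gt;, the controller must be registered before the page opens. The get CLI generates a binding for this when it creates a page; it typically looks like the sketch below (home_binding.dart):

import 'package:get/get.dart';

import '../controllers/home_controller.dart';

class HomeBinding extends Bindings {
  @override
  void dependencies() {
    // Lazily create HomeController the first time the page requests it.
    Get.lazyPut<HomeController>(() => HomeController());
  }
}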
2- Camera View
Create a new directory named components inside the views folder, and in it create a Dart file named camera_view.dart (matching the import above).
camera_view.dart
import 'dart:io';
import 'package:camera/camera.dart';
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:google_mlkit_commons/google_mlkit_commons.dart';
import 'package:image_picker/image_picker.dart';

import '../../../../../main.dart';
enum ScreenMode { liveFeed, gallery }
class CameraView extends StatefulWidget {
CameraView(
{Key? key,
required this.title,
required this.customPaint,
this.text,
required this.onImage,
this.onScreenModeChanged,
this.initialDirection = CameraLensDirection.back})
: super(key: key);
final String title;
final CustomPaint? customPaint;
final String? text;
final Function(InputImage inputImage) onImage;
final Function(ScreenMode mode)? onScreenModeChanged;
final CameraLensDirection initialDirection;
@override
_CameraViewState createState() => _CameraViewState();
}
class _CameraViewState extends State<CameraView> {
ScreenMode _mode = ScreenMode.liveFeed;
CameraController? _controller;
File? _image;
String? _path;
ImagePicker? _imagePicker;
int _cameraIndex = 0;
double zoomLevel = 0.0, minZoomLevel = 0.0, maxZoomLevel = 0.0;
final bool _allowPicker = true;
bool _changingCameraLens = false;
@override
void initState() {
super.initState();
_imagePicker = ImagePicker();
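// Prefer a camera facing the requested direction whose sensor orientation
// is 90 degrees (the typical back camera on Android); otherwise fall back
// to the first camera facing that direction.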
if (cameras.any(
(element) =>
element.lensDirection == widget.initialDirection &&
element.sensorOrientation == 90,
)) {
_cameraIndex = cameras.indexOf(
cameras.firstWhere((element) =>
element.lensDirection == widget.initialDirection &&
element.sensorOrientation == 90),
);
} else {
_cameraIndex = cameras.indexOf(
cameras.firstWhere(
(element) => element.lensDirection == widget.initialDirection,
),
);
}
_startLiveFeed();
}
@override
void dispose() {
_stopLiveFeed();
super.dispose();
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title),
actions: [
if (_allowPicker)
Padding(
padding: EdgeInsets.only(right: 20.0),
child: GestureDetector(
onTap: _switchScreenMode,
child: Icon(
_mode == ScreenMode.liveFeed
? Icons.photo_library_outlined
: (Platform.isIOS
? Icons.camera_alt_outlined
: Icons.camera),
),
),
),
],
),
body: _body(),
floatingActionButton: _floatingActionButton(),
floatingActionButtonLocation: FloatingActionButtonLocation.centerFloat,
);
}
Widget? _floatingActionButton() {
if (_mode == ScreenMode.gallery) return null;
if (cameras.length == 1) return null;
return SizedBox(
height: 70.0,
width: 70.0,
child: FloatingActionButton(
child: Icon(
Platform.isIOS
? Icons.flip_camera_ios_outlined
: Icons.flip_camera_android_outlined,
size: 40,
),
onPressed: _switchLiveCamera,
));
}
Widget _body() {
Widget body;
if (_mode == ScreenMode.liveFeed) {
body = _liveFeedBody();
} else {
body = _galleryBody();
}
return body;
}
Widget _liveFeedBody() {
if (_controller?.value.isInitialized != true) {
return Container();
}
final size = MediaQuery.of(context).size;
// calculate scale depending on screen and camera ratios
// this is actually size.aspectRatio / (1 / camera.aspectRatio)
// because camera preview size is received as landscape
// but we're calculating for portrait orientation
var scale = size.aspectRatio * _controller!.value.aspectRatio;
// to prevent scaling down, invert the value
if (scale < 1) scale = 1 / scale;
return Container(
color: Colors.black,
child: Stack(
fit: StackFit.expand,
children: [
Transform.scale(
scale: scale,
child: Center(
child: _changingCameraLens
? Center(
child: const Text('Changing camera lens'),
)
: CameraPreview(_controller!),
),
),
if (widget.customPaint != null) widget.customPaint!,
Positioned(
bottom: 100,
left: 50,
right: 50,
child: Slider(
value: zoomLevel,
min: minZoomLevel,
max: maxZoomLevel,
onChanged: (newSliderValue) {
setState(() {
zoomLevel = newSliderValue;
_controller!.setZoomLevel(zoomLevel);
});
},
divisions: (maxZoomLevel - 1).toInt() < 1
? null
: (maxZoomLevel - 1).toInt(),
),
)
],
),
);
}
Widget _galleryBody() {
return ListView(shrinkWrap: true, children: [
_image != null
? SizedBox(
height: 400,
width: 400,
child: Stack(
fit: StackFit.expand,
children: [
Image.file(_image!),
if (widget.customPaint != null) widget.customPaint!,
],
),
)
: Icon(
Icons.image,
size: 200,
),
Padding(
padding: EdgeInsets.symmetric(horizontal: 16),
child: ElevatedButton(
child: Text('From Gallery'),
onPressed: () => _getImage(ImageSource.gallery),
),
),
Padding(
padding: EdgeInsets.symmetric(horizontal: 16),
child: ElevatedButton(
child: Text('Take a picture'),
onPressed: () => _getImage(ImageSource.camera),
),
),
if (_image != null)
Padding(
padding: const EdgeInsets.all(16.0),
child: Text(
'${_path == null ? '' : 'Image path: $_path'}\n\n${widget.text ?? ''}'),
),
]);
}
Future _getImage(ImageSource source) async {
setState(() {
_image = null;
_path = null;
});
final pickedFile = await _imagePicker?.pickImage(source: source);
if (pickedFile != null) {
_processPickedFile(pickedFile);
}
setState(() {});
}
void _switchScreenMode() {
_image = null;
if (_mode == ScreenMode.liveFeed) {
_mode = ScreenMode.gallery;
_stopLiveFeed();
} else {
_mode = ScreenMode.liveFeed;
_startLiveFeed();
}
if (widget.onScreenModeChanged != null) {
widget.onScreenModeChanged!(_mode);
}
setState(() {});
}
Future _startLiveFeed() async {
var cameras = await availableCameras();
final camera = cameras[_cameraIndex];
_controller = CameraController(
camera,
ResolutionPreset.high,
enableAudio: false,
);
_controller?.initialize().then((_) {
if (!mounted) {
return;
}
_controller?.getMinZoomLevel().then((value) {
zoomLevel = value;
minZoomLevel = value;
});
_controller?.getMaxZoomLevel().then((value) {
maxZoomLevel = value;
});
_controller?.startImageStream(_processCameraImage);
setState(() {});
});
}
Future _stopLiveFeed() async {
await _controller?.stopImageStream();
await _controller?.dispose();
_controller = null;
}
Future _switchLiveCamera() async {
setState(() => _changingCameraLens = true);
_cameraIndex = (_cameraIndex + 1) % cameras.length;
await _stopLiveFeed();
await _startLiveFeed();
setState(() => _changingCameraLens = false);
}
Future _processPickedFile(XFile? pickedFile) async {
final path = pickedFile?.path;
if (path == null) {
return;
}
setState(() {
_image = File(path);
});
_path = path;
final inputImage = InputImage.fromFilePath(path);
widget.onImage(inputImage);
}
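// Converts a streamed CameraImage into an ML Kit InputImage: the plane
// bytes are concatenated into one buffer and paired with size, rotation,
// and format metadata so the detector can interpret them.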
Future _processCameraImage(CameraImage image) async {
final WriteBuffer allBytes = WriteBuffer();
for (final Plane plane in image.planes) {
allBytes.putUint8List(plane.bytes);
}
final bytes = allBytes.done().buffer.asUint8List();
final Size imageSize =
Size(image.width.toDouble(), image.height.toDouble());
final camera = cameras[_cameraIndex];
final imageRotation =
InputImageRotationValue.fromRawValue(camera.sensorOrientation);
if (imageRotation == null) return;
final inputImageFormat =
InputImageFormatValue.fromRawValue(image.format.raw);
if (inputImageFormat == null) return;
final planeData = image.planes.map(
(Plane plane) {
return InputImagePlaneMetadata(
bytesPerRow: plane.bytesPerRow,
height: plane.height,
width: plane.width,
);
},
).toList();
final inputImageData = InputImageData(
size: imageSize,
imageRotation: imageRotation,
inputImageFormat: inputImageFormat,
planeData: planeData,
);
final inputImage =
InputImage.fromBytes(bytes: bytes, inputImageData: inputImageData);
widget.onImage(inputImage);
}
}
With this, the real-time camera view is complete.
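Note that camera_view.dart imports main.dart for a top-level cameras list. A minimal sketch of that file, assuming the default routing that get init generates (app/routes/app_pages.dart):

import 'package:camera/camera.dart';
import 'package:flutter/material.dart';
import 'package:get/get.dart';

import 'app/routes/app_pages.dart';

// Populated before runApp so CameraView can select a lens by index.
late List<CameraDescription> cameras;

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  cameras = await availableCameras();
  runApp(
    GetMaterialApp(
      title: 'Pose Detector',
      initialRoute: AppPages.INITIAL,
      getPages: AppPages.routes,
    ),
  );
}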
3- Coordinates Translator
Create a new Dart file inside components named coordinates_translator.dart. This file translates the x and y coordinates of a detected landmark from image space to screen space, accounting for the camera rotation.
import 'dart:io';
import 'dart:ui';

import 'package:google_mlkit_commons/google_mlkit_commons.dart';
double translateX(
double x, InputImageRotation rotation, Size size, Size absoluteImageSize) {
switch (rotation) {
case InputImageRotation.rotation90deg:
return x *
size.width /
(Platform.isIOS ? absoluteImageSize.width : absoluteImageSize.height);
case InputImageRotation.rotation270deg:
return size.width -
x *
size.width /
(Platform.isIOS
? absoluteImageSize.width
: absoluteImageSize.height);
default:
return x * size.width / absoluteImageSize.width;
}
}
double translateY(
double y, InputImageRotation rotation, Size size, Size absoluteImageSize) {
switch (rotation) {
case InputImageRotation.rotation90deg:
case InputImageRotation.rotation270deg:
return y *
size.height /
(Platform.isIOS ? absoluteImageSize.height : absoluteImageSize.width);
default:
return y * size.height / absoluteImageSize.height;
}
}
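To see what the translation does, take a hypothetical live-feed frame on Android: the sensor delivers a landscape 720x480 image while the preview is drawn on a portrait canvas, so for a 90-degree rotation the x coordinate is scaled against the image height:

// Hypothetical numbers: a landmark at x = 100 in a 720x480 camera image,
// drawn on a 360x640 portrait canvas, rotated 90 degrees (Android).
final screenX = translateX(
  100,
  InputImageRotation.rotation90deg,
  const Size(360, 640), // canvas (widget) size
  const Size(720, 480), // absolute camera image size
);
// => 100 * 360 / 480 = 75.0 (divided by the image height, not its width)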
4- Create a Pose Painter
Create a pose painter that draws the detected pose (landmarks and connecting lines) on top of the image.
pose_painter.dart
import 'package:flutter/material.dart';
import 'package:google_mlkit_pose_detection/google_mlkit_pose_detection.dart';

import 'coordinates_translator.dart';
class PosePainter extends CustomPainter {
PosePainter(this.poses, this.absoluteImageSize, this.rotation);
final List<Pose> poses;
final Size absoluteImageSize;
final InputImageRotation rotation;
@override
void paint(Canvas canvas, Size size) {
final paint = Paint()
..style = PaintingStyle.stroke
..strokeWidth = 4.0
..color = Colors.green;
final leftPaint = Paint()
..style = PaintingStyle.stroke
..strokeWidth = 3.0
..color = Colors.yellow;
final rightPaint = Paint()
..style = PaintingStyle.stroke
..strokeWidth = 3.0
..color = Colors.blueAccent;
for (final pose in poses) {
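// Draw every detected landmark as a small dot.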
pose.landmarks.forEach((_, landmark) {
canvas.drawCircle(
Offset(
translateX(landmark.x, rotation, size, absoluteImageSize),
translateY(landmark.y, rotation, size, absoluteImageSize),
),
1,
paint);
});
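// Draws a line between two landmarks. ML Kit returns all 33 landmarks
// for a detected pose, so the lookups below are non-null.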
void paintLine(
PoseLandmarkType type1, PoseLandmarkType type2, Paint paintType) {
final PoseLandmark joint1 = pose.landmarks[type1]!;
final PoseLandmark joint2 = pose.landmarks[type2]!;
canvas.drawLine(
Offset(translateX(joint1.x, rotation, size, absoluteImageSize),
translateY(joint1.y, rotation, size, absoluteImageSize)),
Offset(translateX(joint2.x, rotation, size, absoluteImageSize),
translateY(joint2.y, rotation, size, absoluteImageSize)),
paintType);
}
//Draw arms
paintLine(
PoseLandmarkType.leftShoulder, PoseLandmarkType.leftElbow, leftPaint);
paintLine(
PoseLandmarkType.leftElbow, PoseLandmarkType.leftWrist, leftPaint);
paintLine(PoseLandmarkType.rightShoulder, PoseLandmarkType.rightElbow,
rightPaint);
paintLine(
PoseLandmarkType.rightElbow, PoseLandmarkType.rightWrist, rightPaint);
//Draw Body
paintLine(
PoseLandmarkType.leftShoulder, PoseLandmarkType.leftHip, leftPaint);
paintLine(PoseLandmarkType.rightShoulder, PoseLandmarkType.rightHip,
rightPaint);
//Draw legs
paintLine(
PoseLandmarkType.leftHip, PoseLandmarkType.leftAnkle, leftPaint);
paintLine(
PoseLandmarkType.rightHip, PoseLandmarkType.rightAnkle, rightPaint);
}
}
@override
bool shouldRepaint(covariant PosePainter oldDelegate) {
return oldDelegate.absoluteImageSize != absoluteImageSize ||
oldDelegate.poses != poses;
}
}
Go to home_controller.dart and implement the logic that receives the input image and processes it with the Google ML Kit pose detector.
home_controller.dart
import 'package:flutter/material.dart';
import 'package:get/get.dart';
import 'package:google_mlkit_pose_detection/google_mlkit_pose_detection.dart';

import '../views/components/pose_painter.dart';
class HomeController extends GetxController {
final PoseDetector _poseDetector =
PoseDetector(options: PoseDetectorOptions());
bool _canProcess = true;
bool _isBusy = false;
CustomPaint? customPaint;
String? text;
@override
void onClose() {
_canProcess = false;
_poseDetector.close();
super.onClose();
}
Future processImage(InputImage inputImage) async {
if (!_canProcess) return;
if (_isBusy) return;
_isBusy = true;
final poses = await _poseDetector.processImage(inputImage);
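// Size and rotation are only available for camera-stream images
// (InputImage.fromBytes); gallery images created from a file path have
// no inputImageData, so we fall back to a text summary below.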
if (inputImage.inputImageData?.size != null &&
inputImage.inputImageData?.imageRotation != null) {
final painter = PosePainter(poses, inputImage.inputImageData!.size,
inputImage.inputImageData!.imageRotation);
customPaint = CustomPaint(painter: painter);
} else {
text = 'Poses found: ${poses.length}\n\n';
// TODO: set _customPaint to draw landmarks on top of image
customPaint = null;
}
_isBusy = false;
update();
}
}

****************************************************************
Congratulations! You have learned to detect poses the official way, using Google's recommended ML Kit with Flutter and the GetX pattern. Stay tuned for my next article on machine learning with Flutter.
Clap for the article and follow me for more Flutter articles.
I am a professional Flutter mobile app developer; please reach out if you need help with your tasks and projects.