Core ML: Machine Learning for iOS

published in Agile Methodology, iOS Development, Tutorials
by Piotr Przeliorz

You have probably heard phrases like “Machine Learning” or “Artificial Intelligence” without knowing exactly what they mean. In this article, I will shed some light on what Machine Learning is by walking you through a Core ML iOS sample.

It’s really hard to explain what Machine Learning is in one sentence, because nowadays it is a sprawling field. In my opinion, the shortest honest answer is: learning from experience.

What is Machine Learning?

In computer science, Machine Learning means that a computer is able to learn from experience in order to understand data (for example, images, text, numbers, or almost anything else we can express as binary data).

Equally important as the data is the algorithm, which tells the computer how to learn from that data. After applying that algorithm to the data on a powerful computer, the training process produces an artifact as a result. That artifact can be exported as a lightweight executable: a trained model.

What is a machine learning model?

Apple introduced the Core ML framework in 2017. It provides functionality to integrate trained machine learning models into an iOS or macOS app.

[Image: Apple Core ML model]

As mobile developers, we don’t need to be experts in machine learning at all. We just want to play with machine learning features without spending time creating our own trained models.

Fortunately, you can find hundreds of trained models on the internet, for example in the Caffe Model Zoo, TensorFlow Models, or the MXNet Model Zoo. These can easily be converted to the Core ML model format (files with a .mlmodel extension) using conversion tools like mxnet-to-coreml or tfcoreml.

You can also get a model from Apple Developer which works out of the box.

Let’s get through the process of integrating a Core ML model into your app step by step.

Core ML – Machine Learning iOS

I’ve created a sample project which shows how to use a machine learning model on iOS 11 using Core ML.

For this purpose, I’ve used Vision – a framework for image analysis, which is built on top of Core ML, and the Pixabay API, that allows us to download photos.

[Image: Core ML model]

The idea is simple:

  1. Download pictures from Pixabay (thanks to the Kingfisher library for downloading and caching images from the web)
  2. Predict where a photo was taken based on the view from that photo

The first part of the work is pretty easy: make the API call using URLSession, parse the response, and display the results in a UICollectionView.
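A minimal sketch of that first part might look like the following. The `PixabayResponse` and `PixabayImage` types here are simplified assumptions (the real Pixabay JSON contains many more fields), and `apiKey` stands in for your own Pixabay key:

```swift
import Foundation

// Simplified, hypothetical shapes for the Pixabay JSON response.
struct PixabayResponse: Decodable {
    let hits: [PixabayImage]
}

struct PixabayImage: Decodable {
    let webformatURL: String
}

func fetchImages(query: String, apiKey: String,
                 completion: @escaping ([PixabayImage]) -> Void) {
    var components = URLComponents(string: "https://pixabay.com/api/")!
    components.queryItems = [
        URLQueryItem(name: "key", value: apiKey),
        URLQueryItem(name: "q", value: query)
    ]
    // Plain URLSession call; decoding failures simply produce no callback here.
    let task = URLSession.shared.dataTask(with: components.url!) { data, _, _ in
        guard let data = data,
              let response = try? JSONDecoder().decode(PixabayResponse.self, from: data)
        else { return }
        completion(response.hits)
    }
    task.resume()
}
```

In the sample project the decoded image URLs are then handed to Kingfisher, which takes care of downloading and caching the actual image data.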

Once we have our image downloaded, we can make a prediction. First, we need to create a VNCoreMLModel, which is a container for a Core ML model used with Vision requests (for this app I used RN1015k500).

After that, we need to initialize our request object, which is of type VNCoreMLRequest. To do that, we need the VNCoreMLModel object initialized previously. The request is initialized with a completion closure that returns a VNRequest or an Error.

The last object that we need to create for our prediction is VNImageRequestHandler, which accepts data types like:

  • CGImage
  • CVPixelBuffer
  • Data
  • URL
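Each of those input types maps to a dedicated VNImageRequestHandler initializer. As a sketch (assuming `cgImage`, `pixelBuffer`, `imageData`, and `fileURL` already exist in scope):

```swift
import Vision

// One initializer per accepted input type; each also accepts
// an optional options dictionary (and, in most overloads, an orientation).
let fromCGImage = VNImageRequestHandler(cgImage: cgImage)
let fromBuffer  = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
let fromData    = VNImageRequestHandler(data: imageData)
let fromURL     = VNImageRequestHandler(url: fileURL)
```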

When predicting from an image we already hold in memory, the easiest option is the CGImage data type, which is available via the cgImage property on UIImage.

Now that we have all the necessary components, we can call the perform method on the handler (we can run many requests at the same time, because the perform method accepts an array of requests).

private func predict(for image: CGImage) {
        // Wrap the Core ML model in a Vision container
        guard let visionModel = try? VNCoreMLModel(for: mlModel.model) else { return }
        let request = VNCoreMLRequest(model: visionModel) { [weak self] request, _ in
            self?.processRequest(request: request)
        }
        request.imageCropAndScaleOption = .centerCrop
        let handler = VNImageRequestHandler(cgImage: image)
        do {
            try handler.perform([request])
        } catch {
            print("Vision request failed: \(error)")
        }
}
The final step of integrating a Core ML model into an app is to map the VNRequest results to a data format that can be used in the user interface. In our case, we want to get the coordinates of the place where a photo was taken.

First, we access the prediction results via the results property on the VNRequest object and cast them to an array of VNClassificationObservation. The rest of the work depends on what kind of output our Core ML model returns. Here is the snippet of code I used to process the request.

private func processRequest(request: VNRequest) {
        guard let observations = request.results as? [VNClassificationObservation],
              let top = observations.first else { return }
        // The model's identifier is a tab-separated string: "<key>\t<latitude>\t<longitude>"
        let latLong = top.identifier.components(separatedBy: "\t").dropFirst().compactMap { Double($0) }
        guard let lat = latLong.first, let long = latLong.last else { return }
        let result = ImageRecognitionData(latitude: lat, longitude: long, confidence: Double(top.confidence))
        DispatchQueue.main.async {
            // Hand the result back via the stored completion handler
            self.completion?(result)
        }
}
Once the whole request is processed, I pass the result in completion and update the location on the map. And that’s it! Now you know how to use a machine learning model on iOS 11 using Core ML.
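Updating the map from that result can be sketched like this. `showPrediction` is a hypothetical helper (not from the sample project), and `ImageRecognitionData` is the result type built in processRequest above:

```swift
import MapKit

// Hypothetical helper: drop a pin at the predicted location.
func showPrediction(_ result: ImageRecognitionData, on mapView: MKMapView) {
    let coordinate = CLLocationCoordinate2D(latitude: result.latitude,
                                            longitude: result.longitude)
    let annotation = MKPointAnnotation()
    annotation.coordinate = coordinate
    annotation.title = String(format: "Confidence: %.0f%%", result.confidence * 100)
    // Replace the previous prediction's pin and recenter the map.
    mapView.removeAnnotations(mapView.annotations)
    mapView.addAnnotation(annotation)
    mapView.setCenter(coordinate, animated: true)
}
```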

I hope that with this tutorial I have shed some light on Machine Learning. Remember, this is only one of the ways to use Core ML to build more intelligent apps. There is a lot more to Core ML: for example, you can work not only with pictures but also with text, video, or audio.

Check out another of our articles: Test-driven development in Agile PLM: an experimental test.


If you want to learn more about Core ML, check out the WWDC 2017 sessions. Feel free to look through the entire application code and share your comments about machine learning below.

Piotr Przeliorz

Despite his complicated surname, Piotr believes in simple solutions. He loves to create, to modify, to make mistakes in iOS apps and correct them, and, last but not least, to have fun with every single line of new code.
