I was generating models using the following code:
import Foundation
import CreateML
import TabularData
import CoreML
....
func makeTheModel(columntopredict: String, training: DataFrame, colstouse: [String], numberofmodels: Int) -> [MLLinearRegressor] {
    var returnmodels = [MLLinearRegressor]()
    var result = 0.0
    for i in 0...numberofmodels {
        let pms = MLLinearRegressor.ModelParameters(validation: .split(strategy: .automatic))
        do {
            let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict)
            returnmodels.append(tm)
        }
        catch let error as NSError {
            print("Error: \(error.localizedDescription)")
        }
    }
    return returnmodels
}
This worked absolutely fine on Sonoma, but after upgrading the OS to 15.3.1 it does absolutely nothing.
I get no error messages, nothing at all; the code just pauses. If I watch CPU usage, as soon as it hits the line let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict), CPU usage drops to 0%.
What am I doing wrong? Is there a flag I need to set somewhere in Xcode?
This is on an M1 MacBook Pro
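In case it helps with reproducing this, a minimal test along these lines exercises the same initializer (the tiny synthetic DataFrame and its column names are just illustrative):
import CreateML
import TabularData
// Illustrative only: a tiny synthetic DataFrame to exercise the same initializer.
var training = DataFrame()
training.append(column: Column(name: "x", contents: [1.0, 2.0, 3.0, 4.0]))
training.append(column: Column(name: "y", contents: [2.1, 3.9, 6.2, 8.1]))
do {
    // The same call that stalls in the full project.
    let model = try MLLinearRegressor(trainingData: training, targetColumn: "y")
    print(model)
} catch {
    print("Error: \(error.localizedDescription)")
}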
Any help would be greatly appreciated
Based on the documentation, it appears that MLTensor can be used to perform tensor operations on the ANE (Apple Neural Engine) by wrapping the tensor operations in withMLTensorComputePolicy with an MLComputePolicy initialized with MLComputeUnits.cpuAndNeuralEngine (it can also be initialized with MLComputeUnits.all to let the OS spread the load between the Neural Engine, GPU, and CPU).
However, when using the Instruments app, it appears that the tensor operations never get executed on the Neural Engine.
It would be helpful if someone could guide me on the correct way to ensure that the Neural Engine is used to perform the tensor operations (not as part of a CoreML model file).
Based on this example, I've put together a simple test:
import Foundation
import CoreML
print("Starting...")
let semaphore = DispatchSemaphore(value: 0)
Task {
    await withMLTensorComputePolicy(.init(MLComputeUnits.cpuAndNeuralEngine)) {
        let v1 = MLTensor([1.0, 2.0, 3.0, 4.0])
        let v2 = MLTensor([5.0, 6.0, 7.0, 8.0])
        let v3 = v1.matmul(v2)
        await v3.shapedArray(of: Float.self) // is 70.0

        let m1 = MLTensor(shape: [2, 3], scalars: [
            1, 2, 3,
            4, 5, 6
        ], scalarType: Float.self)
        let m2 = MLTensor(shape: [3, 2], scalars: [
            7, 8,
            9, 10,
            11, 12
        ], scalarType: Float.self)
        let m3 = m1.matmul(m2)
        let result = await m3.shapedArray(of: Float.self) // is [[58, 64], [139, 154]]

        // Supports broadcasting
        let m4 = MLTensor(randomNormal: [3, 1, 1, 4], scalarType: Float.self)
        let m5 = MLTensor(randomNormal: [4, 2], scalarType: Float.self)
        let m6 = m4.matmul(m5)

        print("Done")
        return result
    }
    semaphore.signal()
}
semaphore.wait()
Here's what I get in the Instruments app:
Notice how the Neural Engine line shows no usage.
I've run this test on an M1 Max MacBook Pro.
May I know the bundle identifier for Apple Intelligence?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hey guys, I've been having difficulties transferring my Xcode project to a Swift playground (.swiftpm) for the Swift Student Challenge. I keep getting the errors below, and none of my views can find the model in scope:
"TrashDetector 1.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language."
Unexpected duplicate tasks: Target 'TrashQuest' (project 'TrashQuest') has write command with output /Users/kmcph3/Library/Developer/Xcode/DerivedData/TrashQuest-glvzskunedgtakfrdmsxdoplondj/Build/Intermediates.noindex/TrashQuest.build/Debug-iphonesimulator/TrashQuest.build/0a4ef2429d66360920ddb4f16e65e233.sb
I've gone through multiple posts about these exact problems, but they all seem to be talking about ".playground" files because of the "Resources" folder (mind you, I did try exactly what they said). Is there anyone who can help?
(Quick side note: why does it need to be a .swiftpm file for the SSC? Why can't we just send a zip of our Xcode project?)
Topic:
Machine Learning & AI
SubTopic:
Core ML
Is there any way to stop GPU work that has been scheduled using Metal?
Long shader calculations don't stop when the application is stopped in Xcode; they continue to take up GPU time and affect the display.
Why is this functionality not available, when Swift Tasks can be cancelled?
Topic:
Machine Learning & AI
SubTopic:
General
Hello, I am thinking of buying the MacBook Pro 14" with M4 Pro, mostly for ML/AI/NLP tasks. Since I have only used Windows before, I am wondering whether it is compatible with libraries like "PyTorch" and "TensorFlow" etc., or whether people have experienced problems with installation... Thank you!
Topic:
Machine Learning & AI
SubTopic:
General
Hi everyone,
I'm working with VNFeaturePrintObservation in Swift to compute the similarity between images. The computeDistance function allows me to calculate the distance between two images, and I want to cluster similar images based on these distances.
Current Approach
Right now, I'm using a brute-force approach where I compare every image against every other image in the dataset. This results in an O(n^2) complexity, which quickly becomes a bottleneck. With 5000 images, it takes around 10 seconds to complete, which is too slow for my use case.
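For reference, the brute-force comparison looks roughly like this (a minimal sketch; featurePrints is assumed to be an array of precomputed VNFeaturePrintObservation values, one per image):
import Vision
// Minimal sketch of the brute-force O(n^2) comparison.
// `featurePrints` is assumed to be precomputed, one observation per image.
func pairwiseDistances(_ featurePrints: [VNFeaturePrintObservation]) -> [[Float]] {
    let n = featurePrints.count
    var distances = Array(repeating: Array(repeating: Float(0), count: n), count: n)
    for i in 0..<n {
        for j in (i + 1)..<n {
            var distance: Float = 0
            // computeDistance writes the distance between the two feature prints.
            try? featurePrints[i].computeDistance(&distance, to: featurePrints[j])
            distances[i][j] = distance
            distances[j][i] = distance
        }
    }
    return distances
}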
Question
Are there any efficient algorithms or data structures I can use to improve performance?
If anyone has experience with optimizing feature vector clustering or has suggestions on how to scale this efficiently, I'd really appreciate your insights. Thanks!
I have reinstalled everything, including the Command Line Tools, but the CreateML framework fails to install. I need the framework so that I can train my auto-categorization model, which predicts a category based on descriptions. I need that framework because I want to use revision 4.
Please suggest how I should proceed.
I am an app designer and I am curious about which specific ML or AI technologies Apple used to develop features in the system.
As far as I know, Apple's hand-raising detection, destination recommendations in Maps, and exercise-type detection in Fitness all use ML.
Are there more specific application examples of ML or AI?
Does Apple have a document specifically introducing examples of how ML or AI technology is applied in the system?
Topic:
Machine Learning & AI
SubTopic:
General
While training a text classifier model with a few thousand samples completes in seconds, when using 100,000 or 1 million samples, CreateML's training time increases exponentially (to hours or days). During these hours/days, GPU usage is low and almost every CPU core is idle. When using the Swift APIs for model training, resource utilization does not increase. I'm using Xcode 16.2, macOS 15.2 on either an M2 Ultra 64 GB or an M3 Max 48 GB laptop (both using built-in SSD with ~500 GB free) running no other applications.
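For reference, the Swift-API path I tried looks roughly like this (a minimal sketch; the CSV path and column names are illustrative):
import CreateML
import TabularData
// Illustrative: load a large labelled corpus and train a text classifier with the Swift API.
let data = try DataFrame(contentsOfCSVFile: URL(fileURLWithPath: "corpus.csv"))
let classifier = try MLTextClassifier(trainingData: data,
                                      textColumn: "text",
                                      labelColumn: "label")
print(classifier.trainingMetrics)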
Is there a setting I've missed to allow training to take over more of my computing resources? Is this expected of CreateML (i.e., when looking to exploit a larger corpus, I should move to other tooling)? I'd love to speed up my iteration cycle time.
Topic:
Machine Learning & AI
SubTopic:
Create ML
Hello,
I have a CoreML model and I want to convert it to a PyTorch model.
Any ideas if this is possible and if so how?
Topic:
Machine Learning & AI
SubTopic:
Core ML
Hello,
I am developing an app for the Swift Student Challenge; however, I keep encountering an error when using ClassifyImageRequest from the Vision framework in Xcode:
VTEST: error: perform(_:): inside 'for await result in resultStream' error: internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")
It works perfectly when testing it on a physical device, and I saw on another thread that ClassifyImageRequest doesn't work on simulators. Will this cause problems with my submission to the challenge?
Thanks
Topic:
Machine Learning & AI
SubTopic:
General
Tags:
Swift Student Challenge
Swift
Swift Playground
Vision
I have exported a PyTorch model into a Core ML mlpackage file and imported the model file into my iOS project. The model is a music source separation model: it runs prediction on audio-spectrogram blocks and returns separated audio source spectrograms.
The model produces correct results compared to the desktop (GPU + Python) version, but inference on an iPhone 15 Pro Max is really, really slow. Using the Xcode model Performance tool I can see that inference isn't automatically distributed across compute units; all of it runs on the CPU. The Performance tool annotations suggest that all ops should be supported by both the GPU and the Neural Engine.
One thing to note: when initializing the model with the MLModelConfiguration option .cpuAndGPU or .cpuAndNeuralEngine, there is an error in the Xcode console:
`Error(s) occurred compiling MIL to BNNS graph:
[CreateBnnsGraphProgramFromMIL]: Failed to determine convolution kernel at location at /private/var/containers/Bundle/Application/2E3C4AFF-1FA4-4C95-AAE4-ECEBC0FB0BF9/mymss.app/mymss.mlmodelc/model.mil:2453:12
@ CreateBnnsGraphProgramFromMIL`
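For reference, the initialization that triggers this is roughly the following (the model class name MyMSS is just illustrative for the class Xcode generates from my mlpackage):
import CoreML
// Illustrative: request GPU / Neural Engine execution via MLModelConfiguration.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine   // or .cpuAndGPU / .all
// "MyMSS" stands in for the class Xcode generates from the mlpackage.
let model = try MyMSS(configuration: config)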
Before going back to hammering on the model in Python, are there any tips/strategies I could try in the coremltools export phase or in configuring the model for prediction on iOS?
My export toolchain is currently Linux with coremltools v8.1, export target iOS 16.
Dear Apple Developer Team,
I am writing to request the addition of GS1 DataBar Stacked (both regular and expanded variants) to the barcode symbologies supported by the Vision framework (VNBarcodeSymbology) and VisionKit's DataScannerViewController.
Currently, Vision supports several GS1 DataBar formats, such as:
VNBarcodeSymbology.gs1DataBar
VNBarcodeSymbology.gs1DataBarExpanded
VNBarcodeSymbology.gs1DataBarLimited
However, GS1 DataBar Stacked is widely used in industries such as retail, pharmaceuticals, and logistics, where space constraints prevent the use of the standard GS1 DataBar format. Many businesses rely on this symbology to encode GTINs and other product data, but Apple's barcode scanning API does not explicitly support it.
Why This Feature Matters:
Essential for Small Packaging: GS1 DataBar Stacked is commonly used on small product labels where a standard linear barcode does not fit.
Widespread Industry Adoption: Many point-of-sale (POS) systems and inventory management tools require this symbology.
Improves iOS Adoption for Enterprise Use: Adding support would make Apple’s Vision framework a more viable solution for businesses that currently rely on third-party barcode scanning SDKs.
Feature Request:
Please add GS1 DataBar Stacked and GS1 DataBar Expanded Stacked to the recognized symbologies in:
VNBarcodeSymbology (for Vision framework)
DataScannerViewController (for VisionKit)
This addition would enhance the versatility of Apple’s barcode scanning tools and reduce the need for third-party libraries.
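For context, here is a minimal sketch of how detection is configured with Vision today; the stacked variants would simply become additional VNBarcodeSymbology cases (the case named in the comment is the requested, currently hypothetical addition):
import Vision
// Today's configuration with the existing GS1 DataBar cases.
let request = VNDetectBarcodesRequest()
request.symbologies = [.gs1DataBar, .gs1DataBarExpanded, .gs1DataBarLimited]
// A stacked variant (hypothetical, not currently in VNBarcodeSymbology)
// would slot in here, e.g. something like .gs1DataBarStacked.
let handler = VNImageRequestHandler(url: URL(fileURLWithPath: "label.png"))
try handler.perform([request])
let payloads = request.results?.compactMap { $0.payloadStringValue }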
I appreciate your consideration of this request and would be happy to provide more details or test implementations if needed.
Thank you for your time and support!
Best regards
Hello,
I am currently developing an application that requires barcode scanning using Apple’s Vision framework (VNBarcodeSymbology). I noticed that the framework supports several GS1 DataBar symbologies, such as:
VNBarcodeSymbology.gs1DataBar
VNBarcodeSymbology.gs1DataBarExpanded
VNBarcodeSymbology.gs1DataBarLimited
However, I could not find any explicit reference to support for GS1 DataBar Stacked (both regular and expanded variants).
Could you confirm whether GS1 DataBar Stacked is currently supported in VisionKit's DataScannerViewController or VNBarcodeObservation? If not, are there any plans to include support for this symbology in a future iOS update?
This functionality is critical for my use case, as GS1 DataBar Stacked barcodes are widely used in retail, pharmaceuticals, and logistics, where space constraints prevent the use of standard GS1 DataBar formats.
I appreciate any clarification on this matter and would be happy to provide additional details if needed.
I'm working on my Swift Student Challenge submission and developing a Vision framework-based image classifier. I want to ensure I'm following best practices for training data and the guidelines for what images I can use to train my image classifier.
What types of images can I use for training my model?
Are there specific image databases or resources recommended by Apple that are safe to use for Swift Student Challenge submissions?
I'm currently considering images from Wikipedia as well as my own images.
I am working on a CoreML image classification model in Xcode, which takes a 299x299 image and attempts to classify hand-drawn sketches. The model was trained using Create ML and works perfectly when tested in the Create ML preview. However, when used in Xcode application, the classification results are incorrect.
I have already verified that the image is correctly resized to 299x299 pixels, matching the input size of the model. The classification always returns incorrect results, even when using images that were correctly classified during training. I originally used kCVPixelFormatType_32ARGB, but I read that CoreML typically expects BGRA format. I updated my conversion function to use kCVPixelFormatType_32BGRA and CGImageAlphaInfo.premultipliedLast, but the issue persists. This makes me suspect that either the pixel format is still incorrect or that something went wrong during the .mlmodelc compilation.
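For reference, my conversion function currently looks roughly like this (simplified; the pixel format and bitmap-info flags are exactly the parts I suspect):
import CoreVideo
import CoreGraphics
// Simplified version of my CGImage -> CVPixelBuffer conversion (299x299, BGRA).
func pixelBuffer(from image: CGImage, width: Int = 299, height: Int = 299) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: true,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: true] as CFDictionary
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32BGRA, attrs, &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer else { return nil }
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
    // premultipliedLast is what I switched to; whether this alpha/byte-order
    // combination actually matches a 32BGRA buffer is what I'm unsure about.
    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
    context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelBuffer
}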
Topic:
Machine Learning & AI
SubTopic:
Create ML
Has anyone been able to run TensorFlow > 2.15 with tensorflow-metal 1.1.0 on an M3? I tried several times but was not successful. It seems like development on tensorflow-metal has paused?
In an App Playground Xcode project there is no Targets menu in the UI. When I try to use the model, it says the model is not in scope. When I did this in a regular project, Xcode automatically generated a Swift class and there were no errors because the model had a target, but I see no place to add a target in an App Playground.
I’m building an app that generates images based on text input from a specific text field. However, I’m encountering a problem:
For short prompts like "a cat and a dog", the entire string is sent to the Image Playground, even when I use the extracted method. For longer inputs, the behavior is inconsistent: sometimes it extracts keywords correctly, but other times it doesn't extract anything at all.
Since my app relies on generating images based on the extracted keywords, this inconsistency negatively impacts the user experience in my app. How can I make sure that keywords are always extracted from the input string?
Button("Generate", systemImage: "apple.intelligence") {
isPresented = true
}
.imagePlaygroundSheet(isPresented: $isPresented, concepts: [ImagePlaygroundConcept.extracted(from: text, title: textTitle)]) { url in
imageURL = url
}
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence