If you're like me, you were probably very, very excited when Apple announced a few weeks ago at WWDC that AI features were finally coming to Xcode. But since these AI features require a Mac with Apple Silicon, 16 GB of RAM, and the macOS Sequoia beta installed, it's quite possible that you haven't had the time or the opportunity to try them out for yourself. If that's the case, don't worry: I've tried them for you, and I'm going to share my feedback with you. Is it good? Or not?

First, a little disclaimer: I'm recording this video in July 2024, after testing the Predictive Code Completion feature on Xcode 16 Beta 1. So if you're watching this video a few months from now, please keep in mind that I'm reviewing the very first release of the feature, and there's a good chance it has been improved since then. So please do keep that in mind.

And now, let's start with what I've liked. I really liked the fact that the use case shown in the Platforms State of the Union video indeed works in real life. You declare a SwiftUI view, you declare the properties holding the data that the view will display, and Predictive Code Completion is quite capable of suggesting a basic implementation of the view. As you can see, it's quite simple: just the basic building blocks, a VStack and two Texts with some basic modifiers. But if you need to implement several SwiftUI views in your work, you definitely know how writing this kind of code over and over tends to become quite dull and annoying, because in the end your added value as an engineer is not in writing this starting boilerplate, but in customizing it with the correct modifiers and adding the specific logic that makes sense for your app. So the fact that the AI in Xcode is able to write this part for you is really great, and it's something I feel will streamline your workflow when you're implementing views in a SwiftUI app.
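To make the workflow above concrete, here is a minimal sketch of the kind of view I'm describing. The names (`MovieCard`, `title`, `overview`) are illustrative, not the exact code from the video: you declare the view and its properties, and the completion fills in a body along these lines.

```swift
import SwiftUI

// Hypothetical example of the boilerplate the feature can suggest:
// you write the struct declaration and the two properties, and the
// completion proposes a VStack with two Texts and basic modifiers.
struct MovieCard: View {
    let title: String
    let overview: String

    var body: some View {
        VStack(alignment: .leading) {
            Text(title)
                .font(.headline)
            Text(overview)
                .font(.subheadline)
                .foregroundStyle(.secondary)
        }
    }
}
```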
Next, another kind of boilerplate where I also found the AI in Xcode quite efficient is when you've declared a struct to store your data model, and then you want to create a mock: an instance of the struct that contains mock data, but mock data that feels realistic, in the sense that you don't want just random strings of characters or random numbers. You want something that feels real, so you can use your mock, for instance, to populate your view when displaying it in a preview.

Here you can see that I declared a struct that stores data related to movies, and Predictive Code Completion was indeed able to generate a mocked value for that struct. And you can see the data we get is mostly correct: The Matrix is, of course, a real film, and the overview is accurate too. The only issue is the poster path. It doesn't quite have the structure I want, so I would need to remove part of the URL, and when I tried to actually call that URL, it no longer worked. So either the AI generated a random value that never existed, or it generated a poster path that used to exist but has since been moved. I'm not sure, but at least you can see that most of the data, and especially the data that isn't linked to external resources hosted on the internet, is valid. Once again, I can definitely see how this use case can be useful, both when you're developing and want something to display in your preview, and when you're writing tests and want mock data so that your tests run on realistic data.

Another thing that I really enjoyed is the fact that you can use comments as a simple prompt to direct the AI towards what you want to implement.
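Here is a sketch of that mock-data pattern, with hypothetical names (`Movie`, `.mock`, `posterPath`) standing in for the struct shown on screen:

```swift
// Illustrative model similar to the one in the video: a struct describing
// a movie, plus a mock with realistic-looking values of the kind the
// completion can generate, useful for SwiftUI previews and tests.
struct Movie {
    let title: String
    let overview: String
    let releaseYear: Int
    let posterPath: String
}

extension Movie {
    // Caveat from the video: URL-like fields such as posterPath may point
    // at resources that don't actually exist, so only the self-contained
    // data should be trusted as-is.
    static let mock = Movie(
        title: "The Matrix",
        overview: "A computer hacker learns about the true nature of reality.",
        releaseYear: 1999,
        posterPath: "/poster/the-matrix.jpg"
    )
}
```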
So here, inside the body of a view, more precisely inside an HStack, I wrote a comment saying that I wanted to display a blue circle. And you can see that the AI made a suggestion that is almost correct: it added an extra if statement that doesn't make sense in my case, but if I remove that if statement, the generated code is indeed correct. Once again, I think this is a really nice use case, because of course it's simple code to write, but maybe in the moment you don't remember the actual SwiftUI API you need to use to draw a blue circle. Here the AI was able to write this code for you, saving you the time of searching the documentation for the API to draw a circle, then the API to color it blue, et cetera.

And finally, the last example of what I've liked about using Predictive Code Completion in Xcode is that when you want to implement a new feature that's quite similar to a feature already existing in your app, the AI is quite good at figuring out which piece of code to reuse, which parts to change, and which parts should stay the same. This can be quite useful. Just one word of caution: as you can see, this can feel like a very fancy, AI-enabled copy and paste. Remember that code duplication is not a good pattern, and the fact that the AI can help you duplicate code faster is not a good reason to have duplicated code in your app. Still, there are many situations where it makes sense not to create an abstraction common to two pieces of code, and in such situations the AI is, once again, pretty good at helping you streamline your workflow.

And now let's move on to what I haven't liked when trying out the AI features in Xcode. And you will see that, unfortunately, there are quite a few things.
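The comment-as-prompt workflow looks roughly like this (the view name is hypothetical; the comment is the "prompt", and the code below it is what the suggestion amounts to once the spurious `if` is removed):

```swift
import SwiftUI

struct CircleExample: View {
    var body: some View {
        HStack {
            // Display a blue circle  <- the comment that steers the AI
            Circle()
                .fill(.blue)
                .frame(width: 50, height: 50)
        }
    }
}
```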
First, I must say that I'm not 100% satisfied with the ergonomics of how the feature was integrated into Xcode. For instance, there is no visual cue that I could find to tell whether a prediction is being generated or not. So it happened quite a few times that I put my cursor at the place where I wanted a suggestion for the code I was about to write, and then just waited a few seconds to see if a prediction was actually coming. Sometimes it was, but sometimes no prediction came and I was just sitting there waiting for something to happen. Sometimes I had to delete the line and hit return again for a prediction to trigger. So, to be honest, it was already a bit frustrating, and I can imagine that if you use the feature in your regular workday, where you want to implement things fast because you want to be productive and move from one task to another, this would be really frustrating. So I really hope that Apple will improve this. Unless I'm completely mistaken about how the AI works, having a small UI element in Xcode that lets you know a prediction is currently being generated would be very useful and not that hard to implement. I really hope it will come over the summer as we get more beta releases of Xcode.

Another issue I had, which is a little less annoying but still felt like a bit of a regression compared to a solution like Copilot, is that there is no real way to ask for an alternate prediction. Predictive Code Completion gives you one prediction, but you cannot ask for another, whereas with Copilot, you know you can ask for alternate predictions. There is one way to do it, but it's very specific: it's when the prediction triggers from the normal code completion. So, you know, when you write an instance name followed by a dot, and the list of method names appears.
Then, as you move from one method to another, you will get a different prediction. But that's the only case where it's possible; you can't get alternate predictions for the same piece of code. I found that to be a bit of a downside as well, especially in the cases where the AI didn't make exactly the prediction I wanted: I would have liked an alternate, and I didn't have one. Once again, it's something I really hope Apple will improve over the summer.

Now let's talk about what is, for me, the big issue with Predictive Code Completion. If you remember when the feature was announced during the Platforms State of the Union, this is what Apple said: "We've created specialized coding models that capture expertise only Apple can provide, like the latest APIs, language features, and best practices distilled from decades of building software for all our platforms." As you can see, this is quite a promise; they honestly set the bar quite high. So I really wanted to challenge the AI to see if it would deliver on that promise.

First, I tried to have the AI write code for me using the new Swift Testing framework that was just announced at this year's WWDC. And here I was a little disappointed, because as you can see, the AI understood correctly that I wanted to implement a test. However, it did not suggest the syntax of Swift Testing, but rather the syntax of XCTest, the previous testing framework that Apple offered in its SDK. You can see that it suggests using XCTAssertTrue and not #expect, the new Swift Testing syntax. To be fair, Swift Testing is still a very, very new framework, and it's likely that there wasn't much code available to train the AI on. Even I wasn't exactly sure of the actual syntax for an assertion with the new framework.
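For reference, the two syntaxes side by side, with a hypothetical test as the example:

```swift
import XCTest
import Testing

// What the completion suggested: the older XCTest style.
final class CounterTests: XCTestCase {
    func testIncrement() {
        var count = 0
        count += 1
        XCTAssertTrue(count == 1)
    }
}

// What the same check looks like with Swift Testing, where test
// functions are marked with @Test and assertions use the #expect macro.
@Test func increment() {
    var count = 0
    count += 1
    #expect(count == 1)
}
```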
So here, of course, I was a bit disappointed, but it wasn't a deal breaker. I was much more disappointed by the next example, because I asked the AI to implement a simple network call to fetch some data over HTTP, and as you can see, it defaulted to suggesting an implementation that uses a completion handler and not async/await. And here, let's be honest, I was really disappointed, because async/await and Swift concurrency are by no means a new framework or a new language feature like Swift Testing. Async/await was announced three years ago, back in 2021, and by now there is a ton of code available where async/await is used. So I was very disappointed that the AI wasn't able to suggest an implementation of a network call using async/await as the default. It definitely doesn't deliver on the promise of distilling best practices and being aware of the latest language features.

And the thing that annoys me the most here is that for an experienced developer, it's not a problem: you fix it by writing the signature of the function yourself, including async and throws, and then for the body of the function the AI will pick up on it and indeed suggest the async method on URLSession. But where I really have a problem is with more junior developers. Since Predictive Code Completion is enabled by default, a junior developer will see this prediction, and they might not have enough experience to know that this is no longer the right way to proceed. Not knowing it, they will just follow along and completely miss the current best practice. So here, I really hope that Apple will improve things, because for a feature like AI-generated code in Xcode, the way I see it is that at its best, when it works like it should, it should be beneficial.
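The contrast I'm describing can be sketched like this (function names are hypothetical; the URLSession APIs are the real ones):

```swift
import Foundation

// The completion-handler style the AI suggested by default:
func fetchData(from url: URL,
               completion: @escaping (Result<Data, Error>) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, error in
        if let error {
            completion(.failure(error))
        } else if let data {
            completion(.success(data))
        }
    }.resume()
}

// The async/await version that current best practice calls for.
// Once you write the `async throws` signature yourself, the completion
// does pick up on it and suggests the async URLSession API:
func fetchData(from url: URL) async throws -> Data {
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
}
```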
But at its worst, when it doesn't work as it should, it should be harmless. And here it's not harmless; I think it has the potential to be harmful. To be honest, like I said, this is the instance where I was really, really disappointed by what the AI suggested.

So, we went over the good and the bad; it's time to conclude with my feeling after trying Predictive Code Completion for the first time. My honest feeling is that the feature does have some real potential. I showed you the examples where it was able to correctly write boilerplate code for me, and that's something I really appreciated, so it gets me hopeful about what the feature could become. But there are also some issues that I think cannot be overlooked. As I said, the ergonomics can be frustrating in a way that could become annoying if you used the feature on a daily basis. And there is the issue that the prediction can be harmful in some instances, which I really hope Apple will fix. To be honest, I would much prefer that the AI generate nothing rather than generate code that relies on things that are no longer best practice in iOS.

So, to conclude and recap everything in one sentence: the addition of AI features in Xcode 16 is definitely exciting, and I do see their potential to make the day-to-day work of a developer a little bit easier, but they also currently have some issues that cannot be overlooked. I really hope that by the final release of Xcode 16 in September, these issues will have been fixed by Apple.

That's all for this video. I hope you've enjoyed this new format; if that's the case, please let me know in the comments. As always, don't forget to like and subscribe to my channel so that you don't miss my next videos. Thank you for watching, and see you next time.

Transcribed by https://otter.ai