Adding a local coding assistant to VS Code using LMStudio and Continue

I’ve already talked about how impressed I am with #LMStudio (https://lmstudio.ai/). Recently, I came across this article about how to enable a code assistant, such as the excellent OlympicCoder, in VS Code (à la GitHub Copilot) using LMStudio and the Continue extension (https://www.continue.dev/). It’s not very hard, though you do have to edit a config file (horror!), and it works well on my set-up. I would guess OlympicCoder requires beefier hardware than something like Phi-4, but for anyone who games on a PC (or Mac, etc.), I would think you would be fine.
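For reference, the Continue side of this amounts to adding a model entry to its config file. A minimal sketch of what that entry might look like – the model name below is a placeholder (use whatever identifier LM Studio shows for the model you loaded), and Continue’s lmstudio provider assumes LM Studio’s local server is running on its default port:

```json
{
  "models": [
    {
      "title": "OlympicCoder (LM Studio)",
      "provider": "lmstudio",
      "model": "olympiccoder-7b"
    }
  ]
}
```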

Example of OlympicCoder in VSCode from HuggingFace

Leading Change: DVLF>R

Leadership is fundamentally about driving change within an organization – after all, organizations typically excel at maintaining existing operations on their own. However, I frequently observe leaders struggling with change. At times it is hard to realize that change is needed (or when it is not needed!), but even when a leader concludes that change is required, they often struggle to enact it. Years ago, I attended a workshop called “Leading Change,” which introduced me to the DVLF framework. This is a tool that helps with initiating changes – specifically, with judging whether the conditions affecting a change are sufficient for it to succeed. It doesn’t pretend to be a full change framework, instead focusing on communicating and initiating the change.

The framework can be summarized with a simple equation:

D × V × L × F > R

where
D = Dissatisfiers,
V = Vision,
L = Linkages,
F = First steps, and
R = Resistance to change.

I’ll expand on each of these below, but critically, each factor is scored on a 0..1 scale, so if any of them is low (weak), the product tends towards zero, indicating that circumstances are insufficient to overcome R, the natural resistance to change.
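To make the multiplicative nature of the check concrete, here is a toy sketch in Python. The scores are entirely made up for illustration; the point is that a single weak factor sinks the whole product:

```python
# Toy illustration of the DVLF > R check (all scores are hypothetical).
# Each factor is rated on a 0..1 scale; one weak factor drags the
# product toward zero, so the change fails to overcome resistance.

def change_readiness(d, v, l, f):
    """Product of the four DVLF factors, each in [0, 1]."""
    return d * v * l * f

resistance = 0.3  # hypothetical resistance level (R)

# Strong case: every factor is reasonably high.
strong = change_readiness(0.8, 0.9, 0.8, 0.9)  # 0.5184

# Weak case: same scores, except the First steps are vague.
weak = change_readiness(0.8, 0.9, 0.8, 0.1)    # 0.0576

print(strong > resistance)  # True: conditions support the change
print(weak > resistance)    # False: one weak factor sinks it
```

Note that dropping just one factor from 0.9 to 0.1 takes the product from well above R to well below it – which is exactly the intuition the framework is trying to encode.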

Dissatisfiers are the reasons that the current state of affairs cannot continue, i.e., the reasons why the organization must do something different. This aspect is critical to get right because of the “why learners” in your organization – the people who need to understand why a change matters before they will commit to it.

Vision for the change is a description of how things will be better once the change has been completed. It needs to be reasonably detailed so that it is credible.

Linkages are the people or organizations whose help is needed to make the change successful. This could be your peers, your boss, management, partner groups or companies, etc. (and a bonus one below).

First steps are the first few concrete steps the organization needs to take to start enacting the change. It is important that these are simple and achievable, as successful completion builds momentum, which helps the change succeed.

Resistance to the change is natural – people generally prefer the status quo, even those of us who like novel things. Resistance to change is often embodied by a few vocal people (or groups), frequently those who are more experienced and/or who have the longest tenure on the project. These can become your most important Linkages – if you can address their concerns, the Vision and First Steps get stronger, and they convert from resisting to supporting the change.

LM Studio

The launch of ChatGPT in 2022 revitalized the conversational AI field by demonstrating how useful such systems can be. Of course, the popularity of ChatGPT (and similar apps from others) has created a trove of data that enhances subsequent versions – data otherwise difficult to obtain. I’m generally open to sharing my prompts and providing feedback on model performance. However, privacy concerns arise when dealing with sensitive information. Whilst some platforms prioritize privacy preservation (e.g., Microsoft’s business Copilots and Anthropic’s Claude), most do not, creating a quandary.

Meanwhile, the pace of development in conversational AI is staggering. Indeed, there are openly available models that outperform the proprietary ones of a year or two ago, which is very good indeed. But a model is not a chatbot, and even though it doesn’t take much code to get something working (e.g. onnxruntime-genai/examples/python on GitHub), it’s not quite as user-friendly as you might wish.

Enter LM Studio (https://lmstudio.ai/), which is an app for running models locally, specifically designed to make it easy to try different models. It connects to Hugging Face and so supports most of the conversational AI models out there. It will use beefy hardware if you have it, but will run the smaller models on a laptop, albeit a bit slowly. You can even use local files – just drag and drop. However, it does not do RAG, so there is no automated query understanding that goes and fetches potentially relevant data – you will have to do that yourself. But it’s free and everything runs locally, so privacy is preserved!
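LM Studio can also expose a local OpenAI-compatible server, which is how tools like Continue talk to it. A minimal sketch of calling it from Python, assuming the server is running on its default port (1234) and a model is loaded – the model name and prompt here are placeholders:

```python
import json
import urllib.request

# Hypothetical sketch: talk to LM Studio's local OpenAI-compatible
# server. Assumes the server is running on the default port (1234)
# and a model is loaded; the model name below is a placeholder.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt, model="local-model"):
    """Build the JSON payload for an OpenAI-style chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt):
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server running, `ask("Explain list comprehensions in one sentence.")` returns the model’s reply – no data ever leaves your machine.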

Kudos to the LM Studio team for a great product!

LM Studio example (copyright LM Studio)