A Paradigm Shift Is Happening in the AI Space and Hardly Anyone Notices

Microsoft Copilot (AI-based, coming to all Office, Bing and communication products)

Currently named Copilot for Work..

Categories:  AI
Sunday, March 19, 2023 3:28:44 PM (Central Europe Standard Time, UTC+01:00)



M3GAN is nice and cute.

IMDb link.

Categories:  AI
Friday, January 27, 2023 5:51:52 PM (Central Europe Standard Time, UTC+01:00)


 Go on, then.. #AI 

Source link: medRxiv

Quite scary.. think.

Categories:  AI
Tuesday, January 17, 2023 8:04:12 AM (Central Europe Standard Time, UTC+01:00)


 A Little Late Night Conversation with @OpenAI 

Copy and paste from my conversation with @OpenAI.

Response time was under 20 milliseconds. Handling context with no issues. Fascinating.

> So, how are you OpenAI?
I'm doing great! OpenAI is a research laboratory dedicated to finding ways to use artificial intelligence to benefit humanity. We're making advances in machine learning, robotics, and other AI technologies, and we're always looking for new opportunities to make the world a better place.

> What's a garbage collector?
A garbage collector is a program in computer science that is responsible for managing the memory allocation of a computer system in order to ensure that it runs efficiently. It frees up memory that is no longer in use and reclaims memory that is being used inefficiently.

> Should we use it?
Yes, garbage collection is important for the efficient use of memory and can help improve the performance of a computer system. If a computer system is not using garbage collection, it’s likely that it will suffer from memory leaks, which can lead to reduced performance and system instability.

> Hmm, I should have asked differently...
No worries, we're here to help. What other questions do you have about garbage collection?

> What I meant to ask was, should we use it manually or should we leave garbage collection to platforms we use?
It depends on the platform and the application you are developing. If the platform has a built-in garbage collection system, you can usually leave it to the platform to manage memory allocation and reclaim memory. However, in some cases, it might be necessary to manually implement garbage collection in order to optimize the performance of the system.

> Could you give me an example of that?
Sure. For example, if you are developing a game, it might be beneficial to manually implement garbage collection in order to free up memory that is no longer in use and reclaim memory that is being used inefficiently. This can help keep the game running smoothly and improve the overall user experience.

> Any other examples?
Yes, garbage collection can also be used in other areas such as web development and database management. Garbage collection can help reduce the amount of memory that is used by a web application and can help ensure that a database is running efficiently.

> Any other examples when we should trust the platforms for garbage collection?
In many cases, it is best to trust the platforms for garbage collection. For example, if you are developing an application that runs on a popular platform such as Android or iOS, you can usually trust the garbage collection system that is built into the platform. This will ensure that memory is being managed efficiently and that the application is running smoothly.

> What do you mean by 'managed efficiently'?
When garbage collection is managed efficiently, it means that memory is being allocated and reclaimed in a way that maximizes efficiency and minimizes memory leaks. This helps to ensure that the application runs smoothly and does not suffer from performance issues due to memory leaks.

> You are quite knowledgeable about the subject.
Thank you for your kind words!

> What happens if memory is not reclaimed?
If memory is not reclaimed, it can lead to memory leaks and decrease the performance of the application. Memory leaks can also cause the application to crash or become unstable. To prevent these issues, it is important to ensure that memory is being reclaimed efficiently with the help of garbage collection.

> So, what did programmers do when there was no automatic garbage collection?
Before the advent of automatic garbage collection, programmers had to manually manage memory allocation and reclaim memory when necessary. This could be a tedious and time consuming process, and errors in memory management could lead to memory leaks and other performance issues.

> What do they do now in C or Lisp or Fortran?
Nowadays, many programming languages such as C, Lisp, and Fortran have built-in garbage collection systems. This allows programmers to focus more on the application logic, rather than having to manually manage memory allocation and reclaim memory.

> Which platforms or languages do have automatic garbage collection?
Many popular platforms and languages such as Java, JavaScript, Python, and more have built-in garbage collection systems. Additionally, many modern operating systems such as Windows and macOS also have garbage collection systems.

> Does .NET have it?
Yes, .NET has a built-in garbage collection system. The .NET garbage collection system is responsible for managing memory allocation, reclaiming memory when appropriate, and ensuring that the application runs efficiently.

> So.. What are generations in terms of .NET garbage collection?
Generations are a concept used in .NET garbage collection to help improve the performance of the garbage collector. The .NET garbage collector divides objects into three generations based on their lifetime, and objects that have been alive for longer are placed into higher generations. This helps the garbage collector to identify which objects are the most likely to be reclaimed, allowing it to reclaim memory more efficiently.

> Tell me about latency models in this case.
Latency models are used in .NET garbage collection to help reduce the impact that garbage collection has on the performance of the application. The latency model estimates the time it will take for the garbage collector to reclaim memory, and then it schedules garbage collection at times that will have the least impact on the application. This helps to ensure that the application runs smoothly, even when garbage collection is running.

> Any thoughts on weak references for GC?
Weak references are a type of reference that can help the garbage collector reclaim memory more efficiently. Weak references are references to objects that are no longer strongly referenced, and they allow the garbage collector to reclaim the objects that they refer to even if they are not strongly referenced. This can help reduce memory leaks and improve the performance of the application.

> You might just pass a Turing test. What is it?
The Turing test is a test of a machine’s ability to exhibit intelligent behavior that is indistinguishable from that of a human. It was proposed by Alan Turing in 1950, and it is still used today to evaluate the performance of AI systems.

> Could you pass it?
My understanding is that I am not the one who is being tested in the Turing test - it is the machine that is being tested. However, it is possible that I could pass the Turing test if I was able to exhibit intelligent behavior that is indistinguishable from that of a human.

> So coming back to GC.. Could you explain the concept of compaction?
Compaction is a process used by the garbage collector to reclaim memory more efficiently. The garbage collector moves objects around in memory to free up memory that is no longer in use and to reduce the amount of memory fragmentation. This helps to reduce memory leaks and improve the performance of the application.

> No, GC as in garbage collection...
Ah, of course! Garbage collection is a process used in computer science to manage the memory allocation of a computer system in order to ensure that it runs efficiently. It frees up memory that is no longer in use and reclaims memory that is being used inefficiently.

> Giving you an option to select between a forced and optimized garbage collection. When would you choose one or the other?
The choice between forced and optimized garbage collection depends on the application and the expected workload of the system. Forced garbage collection is used when the application needs to reclaim memory immediately and the system can tolerate a slight decrease in performance. Optimized garbage collection is used when the application can wait for the garbage collector to reclaim memory and the system's performance needs to be maximized.

> Quite good.
Thank you!

> Could we change topics?
Of course! What would you like to talk about?

> How much power does a 12kW power outlet generate in 12 hours?
A 12kW power outlet will generate 144 kWh (12 kW x 12 hours) of electricity in 12 hours.

> What about 120 hours?
For 120 hours, the 12 kW power outlet will generate 1,440 kWh (12 kW x 120 hours) of electricity.

> So, a 700MW power plant generates how many times more power than this power outlet?
A 700 MW power plant will generate 4,800,000 kWh of electricity in 12 hours (700 MW x 12 hours), which is 33,333 times more power than the 12 kW power outlet.

> What are typical examples of such powerful power plants? Do they run on coal?
Yes, many large power plants are powered by coal. These power plants are typically used to generate electricity for large cities and regions. Examples of such power plants include the Taichung Power Plant in Taiwan, which has a capacity of 5,500 MW, and the Four Corners Power Plant in the US, which has a capacity of 3,520 MW.

> What about nuclear, which one generates the most power?
The Kashiwazaki-Kariwa Nuclear Power Plant in Japan is the largest nuclear power plant in the world, with a capacity of 7,965 MW. It is followed by the Fukushima Daiichi Nuclear Power Plant in Japan (4,711 MW) and the Bruce Nuclear Generating Station in Canada (4,400 MW).

> And in Europe?
In Europe, the largest nuclear power plant is the Kola Nuclear Power Plant in Russia, with a capacity of 4,800 MW. It is followed by the Cattenom Nuclear Power Plant in France (3,960 MW) and the Olkiluoto Nuclear Power Plant in Finland (2,800 MW).

> And in Slovenia?
In Slovenia, the only nuclear power plant is the Krško Nuclear Power Plant, with a capacity of 688 MW.

> Austria?
In Austria, there are no nuclear power plants.

> Ukraine?
In Ukraine, the largest nuclear power plant is the Zaporizhia Nuclear Power Plant, with a capacity of 6,000 MW.

> You just said the Kola nuclear power plant is the largest in Europe. What gives?
The Kola Nuclear Power Plant in Russia is the largest nuclear power plant in Europe in terms of capacity, but it is not located in Europe itself. It is located in the Murmansk region of Russia, which is east of Finland and Norway.

> So, how many reactors does Zaporizhia consist of?
The Zaporizhia Nuclear Power Plant consists of six nuclear reactors.

> Each generating how many megawatts?
Each of the six reactors at the Zaporizhia Nuclear Power Plant generates 1,000 MW of electricity.

> Thank you. You've been splendid.
You're welcome! It was my pleasure to help.
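The weak-reference behavior discussed above can be sketched in a few lines of Python, used here as a stand-in for .NET's WeakReference (the class name is illustrative):

```python
import gc
import weakref

class Payload:
    """A hypothetical object we only hold on to weakly."""

obj = Payload()
ref = weakref.ref(obj)      # weak reference: does not keep the object alive
assert ref() is obj         # target still reachable while a strong reference exists

del obj                     # drop the last strong reference
gc.collect()                # ask the collector to reclaim unreachable objects
print(ref())                # -> None: the collector reclaimed the target
```

Once the last strong reference is gone, the collector is free to reclaim the object, and the weak reference starts returning nothing - exactly the "reclaim even if referenced" behavior described in the answer above.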

Categories:  AI
Wednesday, December 7, 2022 1:52:40 AM (Central Europe Standard Time, UTC+01:00)


 Bing Daily Image - A Bing Image of the Day Download Tool 

Ever wanted to have your Windows desktop background look sharp? Like, Bing Gallery sharp? Let me help you.

Here's a little tool I'm releasing today that allows you to:

  • Get daily Bing Image of the Day to your desktop
  • Perform bulk downloads, multiple images at a time
  • Add image title and description to your wallpaper [1]
  • Run it as a service

It's all in one tool. Small, simple, efficient.

Here's the parameter model it supports:

BingDailyImage v1.0 - Download desktop wallpaper images from Bing
Copyright © 2021, Matevž Gačnik

Gets Bing Picture of the Day images for today and a specified number of days back.

Usage: BingDailyImage [-c] [-d [days]] [-f folder] [-t [top|bottom]] [-b]
   -c             Get current Bing image
   -d [days]      Specifies number of days to fetch.
                  If you omit this parameter the tool will download
                  last two weeks (14 days) of Bing wallpapers.
   -f             Set download folder
                  If you omit this parameter the folder will be
                  set to  - '%USERPROFILE%\Pictures\Bing Wallpapers'.
   -t             Add text (image title and description) to images
                  You can specify text position [top, bottom]. Default is bottom.
   -b             Set last downloaded image as desktop background
   -s install     Installs BingDailyImage as a system service
                  Use -f to specify service download folder path
                  Use -t to let service add text to images
   -s uninstall   Uninstalls BingDailyImage as a system service
   -s start       Starts BingDailyImage service
   -s stop        Stops BingDailyImage service
   -s query       Queries BingDailyImage service state
   -h             Displays help

You can just do a BingDailyImage.exe -c to get the current daily image. By default, it will not tamper with background images, so you'll get the highest resolution available (1920x1200 or 1920x1080), like this:

BingDailyImage v1.0 - Download desktop wallpaper images from Bing
Copyright © 2021, Matevž Gačnik

Downloading Bing Image of the Day for 2021-12-16.
Image date: 2021-12-16
Image title: Old Town in Salzburg, Austria
Image description: When it's lit up like this with a cozy glow, we can admire… When there's a mountain in your city… We're looking at the Old Town portion
of this Baroque city…
Downloading background... Background for 1920x1200 found.
Saving background... Done for 2021-12-16.

Or do a BingDailyImage.exe -d 10 -t to get the last 10 and add a nice, transparent background text to them.

Hell, do a BingDailyImage.exe -s install and forget about it. It's going to download new images once they are published to Bing's servers. All you need to do now is set your Windows 10/11 desktop background to be fetched from the download folder. Done.

Here's the download.

Here's a sample of a downloaded image.


[1] You might be surprised about the fun facts you'll learn every day.

Categories:  Other | Personal | Windows 10
Wednesday, December 1, 2021 7:05:46 PM (Central Europe Standard Time, UTC+01:00)


 Clouds Will Fail 

This is a slightly less technical post, covering my experiences and thoughts on cloud computing as a viable business processing platform.

The recent Amazon EC2 failure has gathered a considerable amount of press and discussion coverage. Most discussions revolve around the failure of cloud computing to deliver on its promise to never go down, never lose a bit of information.

This is wrong and has been wrong for a couple of years. Marketing people should not be making promises their technical engineers can't deliver. Actually, marketing should step away from highly technical features and services in general. I find it funny that there is no serious marketing involved in selling BWR reactors (which fail too), yet they probably serve about the same number of people as cloud services do nowadays.

Getting back to the topic: as you may know, EC2 failed miserably a couple of weeks ago. It was something that should not happen - at least in many techie minds. The fault in the AWS EC2 cloud was in the EBS storage system, which failed across multiple AWS availability zones within the same AWS region in Northern Virginia. Think of availability zones as server racks within the same data center, and regions as different datacenters.

Companies like Twitter, Reddit, Foursquare, Tekpub, Quora and others all deployed their solutions to the same Amazon region - in this case, Northern Virginia - and were thus susceptible to problems within that specific datacenter. They could have replicated across different AWS regions, but did not.

Thus, clouds will fail. It's only a matter of time. They will go down. The main thing clouds deliver is a lower probability of failure, not its elimination. Thinking that cloud computing will solve the industry's fears of losing data or deliver 100% uptime is downright imaginary.

Take a look at EC2's SLA. It says 99.95% availability. Microsoft's Azure SLA? 99.9%. That's roughly 4.4 and 8.8 hours of yearly downtime built in, respectively! And we didn't even start to discuss how much junk marketing people will sell.
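The downtime those SLA percentages permit is easy to compute; a quick back-of-the-envelope sketch (illustrative, not an official SLA interpretation):

```python
HOURS_PER_YEAR = 365.25 * 24  # ~8766 hours

def downtime_hours_per_year(availability_percent: float) -> float:
    """Yearly downtime budget implied by an availability SLA percentage."""
    return (1 - availability_percent / 100) * HOURS_PER_YEAR

print(round(downtime_hours_per_year(99.95), 1))  # EC2 at 99.95%: ~4.4 hours/year
print(round(downtime_hours_per_year(99.9), 1))   # Azure at 99.9%: ~8.8 hours/year
```

Even the stricter SLA leaves several hours of contractually acceptable downtime every year - before any compensation clause kicks in.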

We are still in an IaaS world, although companies are really pushing PaaS and SaaS hard. Having said that, Windows Azure's goal of 'forget about it, we will save you anyway' currently has a lot more merit than other offerings. It is indeed trying to go the PaaS and SaaS route while abstracting away the physical machines, racks and local/private datacenters.

Categories:  Architecture | Other
Saturday, May 4, 2019 9:02:27 PM (Central Europe Standard Time, UTC+01:00)


 P != NP Proof Failing 

One of the most important steps needed for computer science to get to the next level seems to be fading away.

P vs NP

Actually, its proof is playing hard to catch again. This question (whether P = NP or P != NP) does not want to be answered. It could be that the problem of proving it is also NP-complete.

The (scientific) community wants - needs - closure. If P != NP were proven, a lot of orthodox legislation in PKI, cryptography and signature/timestamp validity would probably become looser. If P = NP is true, well, s*!t hits the fan.

Categories:  Other
Sunday, August 19, 2018 9:31:28 PM (Central Europe Standard Time, UTC+01:00)


 Debugging Windows Azure DevFabric HTTP 500 Errors 

While developing with Windows Azure SDK and local Azure development fabric, when things go wrong, they go really wrong.

It could be something as obscure as leaving an empty element somewhere in Web.config, or a certificate issue.

The problem is that before Visual Studio attaches a debugger to an IIS worker process, a lot of things can go wrong. What you get is this:

There's no Event Log entry, nothing in the typical dev fabric temp folder (always check 'C:\Users\\AppData\Local\Temp\Visual Studio Web Debugger.log' first), nada.

Poking deeper, what you need to do is allow IIS to respond properly. By default, IIS only displays complete error details for local addresses. So, to get a detailed report, you need to use a local address.

You can get a local address by fiddling with the site binding in IIS Manager and changing what the Azure SDK set up, so:

  • First, start IIS Management Console.
  • Then right-click on your deployment(*).* site and select Edit Bindings.
  • Select All Unassigned

If you hit your web / service role using the new local address, you are most likely getting the full disclosure, like this:

In this case, a service was dispatched into the dev fabric with an empty element somewhere in Web.config. The web role was failing before Visual Studio could attach a debugger, and only an HTTP 500 was returned through normal means of communication.

Categories:  .NET 4.0 - WCF | Windows Azure
Monday, April 10, 2017 3:04:49 PM (Central Europe Standard Time, UTC+01:00)


 The Case of Lightweight Azure MMC Snap-In Not Installing on Azure SDK 1.6 

There are a couple of Windows Azure management tools, scripts and PowerShell commandlets available, but I find Windows Azure Platform Management Tool (MMC snap-in) one of the easiest to install and use for different Windows Azure subscriptions.

The problem is that the tool has not been updated for almost a year and is thus failing when you try to install it on the latest Windows Azure SDK (currently v1.6).

Here's the solution.

Categories:  Windows Azure
Saturday, March 26, 2016 11:36:53 AM (Central Europe Standard Time, UTC+01:00)


 The Case of Guest OS Versioning in Windows Azure 

There's a notion of Windows Guest OS versions in Windows Azure. Guest OS versions can currently (in Q1 2012) be either a stripped-down version of Windows Server 2008 or a similar version of Windows Server 2008 R2.

You can upgrade your guest OS in Windows Azure Management Portal:

Not that it makes much difference, especially while developing .NET solutions, but I like to be on the newest OS version all the time.

The problem is that the defaults are stale. In 1.6 version of the Windows Azure SDK, the default templates all specify the following:


The osFamily attribute defines the OS version, with 1 being Windows Server 2008 and 2 being Windows Server 2008 R2. If you omit the osFamily attribute, the default is 1 too! Actually, this attribute should probably move to the Role element, since it defines the version of the role's guest OS.



It doesn't make sense to have it normalized over all roles. Also, this schema makes it impossible to leave it out in VM role instances, where it gets ignored.

The osVersion attribute defines the guest OS version that should be deployed. The format is * or WA-GUEST-OS-M.m_YYYYMM-nn. You should never use the latter. An asterisk normally means 'please upgrade all my instances automatically'. The asterisk is your friend.

If you want/need Windows Server 2008 R2, change it in your service configuration XML.
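A minimal sketch of a ServiceConfiguration file with both attributes set (service and role names here are illustrative, not from the default templates):

```xml
<!-- Illustrative .cscfg fragment: osFamily="2" selects Windows Server 2008 R2,
     osVersion="*" keeps guest OS instances upgrading automatically. -->
<ServiceConfiguration serviceName="MyService" osFamily="2" osVersion="*"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```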

What this means is that even if you publish and then upgrade your guest OS version in the Azure Management Portal, you will get reverted the next time you update your app from within Visual Studio.

Categories:  Windows Azure
Sunday, March 31, 2013 9:40:01 PM (Central Europe Standard Time, UTC+01:00)


 Bleeding Edge: Pub/Sub Broker Design 

This is the content from my Bleeding Edge talk on pub/sub broker design and implementation.

Due to constraints of the project (European Commission, funded by the EU) I cannot publicly distribute the implementation code at this time. I plan to do so after the review process is done. I have been advised, though, that this probably won't be the case.

Specifically, this is:

  • A message based pub/sub broker
  • Can use typed messages
  • Can be extended
  • Can communicate with anyone
  • Supports push and pull models
  • Can call you back
  • Service based
  • Fast (in memory)
  • Is currently fed by the Twitter gardenhose stream, for your pleasure

Anyway, I can discuss the implementation and design decisions, so here's the PPT (in Slovene only).

Downloads: Bleeding Edge 2011, PPTs
Demo front-end: Here

Wednesday, October 5, 2011 10:17:31 AM (Central Europe Standard Time, UTC+01:00)


 The Case of Empty OptionalFeatures.exe Dialog 

The following is a three-day saga of an empty 'Turn Windows features on or off' dialog.

This dialog, as unimportant as it may seem, is the only orifice into Windows subsystem installations that doesn't require cramping up command-line msiexec.exe wizardry on obscure system installation folders that nobody wants to understand.

Empty, it looks like this:

First thing anyone should do when it comes to something obscure like this is:

  1. Reinstall the OS (kidding, but would help)
  2. In-place upgrade of the OS (kidding, but would help faster)
  3. Clean reboot (really, but most probably won't help)
  4. Run chkdsk /f and sfc /scannow (really)
  5. If that does not help, proceed below

If you still can't control your MSMQ or IIS installation, then you need to find out which of the servicing packages got corrupted somehow.

Servicing packages are Windows Update packages, located in hell under HKLM\Software\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages. I've got a couple thousand under there, so the only question is how to dig the rogue one out.

There's a tool called the System Update Readiness Tool [here] that nobody uses. A side effect of it is that it checks for peculiarities like this. Run it, then unleash notepad.exe on C:\Windows\Logs\CBS\CheckSUR.log and find something like this:

Checking Windows Servicing Packages

Checking Package Manifests and Catalogs
(f) CBS MUM Corrupt 0x800F0900 servicing\Packages\
Package_4_for_KB2446710~31bf3856ad364e35~amd64~~  Line 1:

(f) CBS Catalog Corrupt 0x800B0100 servicing\Packages\  

Then find the package in the registry, take ownership of the node, set permissions so you can delete it, and delete it. Your OptionalFeatures.exe works again, and it took only 10 minutes.

Categories:  Other | Windows 7 | Work
Tuesday, June 7, 2011 7:57:17 AM (Central Europe Standard Time, UTC+01:00)


 Twitter Handle 

I am more active on Twitter lately, finding it personally amusing.


Categories:  Personal
Sunday, March 27, 2011 5:54:45 PM (Central Europe Standard Time, UTC+01:00)


 Load Test Tool for Windows Server AppFabric Distributed Cache 

During exploration of the high availability (HA) features of Windows Server AppFabric Distributed Cache, I needed to generate enough load in a short timeframe. You know, to kill a couple of servers.

This is what came out of it.

It's a simple command line tool, allowing you to:

  • Add millions of objects of arbitrary size to the cache cluster (using cache.Add())
  • Put objects of arbitrary size to the cache cluster
  • Get objects back
  • Remove objects from the cache
  • Has cluster support
  • Has local cache support
  • Will list configuration
  • Will max out your local processors (using .NET 4 Parallel.For())
  • Will perform gracefully, even in times of trouble

I talked about this at a recent Users Group meeting, doing a live demo of cache clusters under load.

Typical usage scenario is:

  1. Configure a HA cluster
    Remember, 3 nodes minimum, Windows Server 2008 (R2) Enterprise or DataCenter
  2. Configure a HA cache
  3. Edit App.config, list all available servers
  4. Connect to cluster
  5. Put a bunch of large objects (generate load)
    Since AppFabric currently supports only the partitioned cache type, this will distribute load among all cluster hosts. Thus, each host will store 1/N of the objects.
  6. Stop one node
  7. Get all objects back
    Since cache is in HA mode, you will get all your objects back, even though a host is down - cluster will redistribute all the missing cache regions to running nodes.

You can download the tool here.

Categories:  .NET 4.0 - General | Architecture | Microsoft
Thursday, December 9, 2010 2:07:25 PM (Central Europe Standard Time, UTC+01:00)


Copyright © 2003-2023, Matevž Gačnik

The opinions expressed herein are my own personal opinions and do not represent my company's view in any way.

My views often change.

This blog is just a collection of bytes.
