{"id":23712,"date":"2022-12-14T11:59:00","date_gmt":"2022-12-14T10:59:00","guid":{"rendered":"https:\/\/stage-fp.webenv.pl\/blog\/?p=23712"},"modified":"2025-11-07T11:33:52","modified_gmt":"2025-11-07T10:33:52","slug":"ml-in-pl-2022-what-we-learned-during-ml-in-pl-2022-conference","status":"publish","type":"post","link":"https:\/\/www.future-processing.com\/blog\/ml-in-pl-2022-what-we-learned-during-ml-in-pl-2022-conference\/","title":{"rendered":"ML in PL 2022: what we learned during the conference"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\"><br>ML in PL 2022 conference<\/h2>\n\n\n\n<p>The <em><strong>ML in PL<\/strong><\/em> conference has been held <strong>annually since 2017.<\/strong> Initially it was organised at the Faculty of Mathematics, Informatics and Mechanics of the University of Warsaw, but during the pandemic it was <strong>moved to a virtual platform.<\/strong> This year it <strong>came back to its original location<\/strong> after two years of existing only in virtual space.<\/p>\n\n\n\n<p>The <strong>main aims of the conference<\/strong> (and of the ML in PL Association in general) are to:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Build<strong> a strong local community<\/strong> of ML researchers, practitioners, and enthusiasts at <strong>various levels<\/strong> of their careers,<\/li>\n\n\n\n<li><strong>Support new generations<\/strong> of students with interests in ML and <strong>promote early research<\/strong> activity,<\/li>\n\n\n\n<li>Foster the <strong>exchange of knowledge in ML,<\/strong><\/li>\n\n\n\n<li>Promote <strong>business engagement in science,<\/strong><\/li>\n\n\n\n<li>Support <strong>international collaboration<\/strong> in ML,<\/li>\n\n\n\n<li>Increase <strong>public understanding <\/strong>of ML.<\/li>\n<\/ul>\n\n\n\n<p>This year\u2019s conference lasted for <strong>three days<\/strong> and was packed with <strong>knowledge, networking, and entertainment.<\/strong><\/p>\n\n\n\n<h2 
class=\"wp-block-heading\"><br>ML in PL conference \u2013 agenda<\/h2>\n\n\n\n<p>It all started with a students\u2019 day, where one could listen to <strong>eight presentations<\/strong> given by students or take part in NVIDIA\u2019s <strong>workshop on the mechanics of deep learning.<\/strong><\/p>\n\n\n\n<p>The <strong>core <\/strong>part included:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>9 <strong>keynote lectures<\/strong>,<\/li>\n\n\n\n<li>3 <strong>discussion <\/strong>panels,<\/li>\n\n\n\n<li>9 <strong>contributed talks,<\/strong><\/li>\n\n\n\n<li>4 <strong>sponsors\u2019 talks,<\/strong><\/li>\n\n\n\n<li><strong>a<\/strong> <strong>poster session <\/strong>with 34 posters.<\/li>\n<\/ul>\n\n\n\n<p>Among the <strong>many topics covered<\/strong> were learning with positive and unlabelled data, computer vision, probabilistic &amp; auto ML, deep learning, reinforcement learning, NLP, science-related ML, probabilistic neural networks and consolidated learning. Besides<strong> lectures, <\/strong>there were also <strong>multiple sponsors\u2019 booths<\/strong> and <strong>a conference party, <\/strong>offering immense networking possibilities.<\/p>\n\n\n\n<p>The conference was<strong> so rich in topics,<\/strong> lectures, and meetings that it is impossible to cover all of them. That is why I selected the four which, in my opinion, were <strong>the most inspiring and interesting.<\/strong> Here they are!<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><br>Can you transfer the best software engineering practices to machine learning code?<\/h2>\n\n\n\n<p>The short answer is <strong>you can, and you should.<\/strong> And you can even get really nice <strong>assistance <\/strong>with that! This assistance is called <a href=\"https:\/\/kedro.org\/\" rel=\"noopener\">Kedro<\/a>. 
Kedro is an <strong>open-source Python framework <\/strong>for creating <strong>maintainable and modular machine learning code.<\/strong> It was presented by <strong>Dominika Kampa <\/strong>and her colleagues from <strong>QuantumBlack AI by McKinsey.<\/strong><\/p>\n\n\n\n<p>One of its most powerful features is <strong>the pipeline visualisation.<\/strong> ML code can be very complex, and maintaining it, as well as explaining it to the business, is often a challenge. If one is able to represent the code as a flow with clear inputs, outputs, parameters, dependencies and layers, then it\u2019s <strong>a lot easier to grasp the entire solution<\/strong> as well as its parts. <strong>I recommend going through the <a href=\"https:\/\/demo.kedro.org\/\" rel=\"noopener\">demo<\/a>,<\/strong> where you can check out how the visualisation works in practice.<\/p>\n\n\n\n<p>One of the technical aspects underlying the visualisation is <strong>the project template. <\/strong>This is <strong>how you start<\/strong> the project \u2013&nbsp;by defining the directory structure. 
Afterwards, you add the data, create a pipeline out of your functions, and finally package the project by building documentation and preparing it for distribution.<\/p>\n\n\n    <div class=\"b-image js-lightbox\">\n        <figure class=\"b-image__figure\">\n            <a\n                href=\"kedro-project-development.jpg\"\n                class=\"js-lightbox__trigger\"\n                aria-haspopup=\"dialog\"\n                data-elementor-open-lightbox=\"no\"\n            >\n                <img fetchpriority=\"high\" decoding=\"async\" width=\"960\" height=\"764\" src=\"https:\/\/www.future-processing.com\/blog\/wp-content\/uploads\/2022\/12\/kedro-project-development.jpg\" class=\"attachment-full size-full\" alt=\"kedro-project-development\" srcset=\"https:\/\/www.future-processing.com\/blog\/wp-content\/uploads\/2022\/12\/kedro-project-development.jpg 960w, https:\/\/www.future-processing.com\/blog\/wp-content\/uploads\/2022\/12\/kedro-project-development-300x239.jpg 300w, https:\/\/www.future-processing.com\/blog\/wp-content\/uploads\/2022\/12\/kedro-project-development-768x611.jpg 768w, https:\/\/www.future-processing.com\/blog\/wp-content\/uploads\/2022\/12\/kedro-project-development-503x400.jpg 503w\" sizes=\"(max-width: 960px) 100vw, 960px\" \/>            <\/a>\n                            <figcaption class=\"b-image__caption f-paragraph\">Source: https:\/\/kedro.readthedocs.io\/en\/stable\/tutorial\/spaceflights_tutorial.html<\/figcaption>\n                    <\/figure>\n    <\/div>\n\n\n\n<p>Another interesting feature is <strong>experiment tracking.<\/strong> The results of all your experiments, together with a description of the environment, are stored in one place, with the possibility to easily browse through them. 
The only thing you need to do is add a few lines of code.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><br>Does human-AI synergy exist?<\/h2>\n\n\n\n<p>One of <strong>the most inspiring and enthusiastic talks<\/strong> was given by <strong>Petar Veli\u010dkovi\u0107,<\/strong> Staff Research Scientist at DeepMind, Affiliated Lecturer at the University of Cambridge and an Associate of Clare Hall, Cambridge. His main research interest is <strong>geometric deep learning,<\/strong> particularly <strong>graph representation learning.<\/strong> This topic has recently been gaining popularity, both in applications and in research. Graphs enable <strong>modelling complex relationships and interdependencies<\/strong> between objects. They find many applications, from social science through logistics to chemistry and beyond. Combined with <strong><a href=\"https:\/\/www.future-processing.com\/services\/ai-and-ml\/\">machine learning,<\/a><\/strong> they demonstrate <strong>ground-breaking achievements,<\/strong> mainly due to their great expressive power.<\/p>\n\n\n\n<p>Among the most renowned success stories of applying <strong>Graph Neural Networks (GNNs),<\/strong> Petar mentioned the Halicin antibiotic discovery by MIT and the Google Maps expected-time-of-arrival optimisation by DeepMind, delivered with Petar\u2019s contribution.<\/p>\n\n\n\n<p>An interesting question is whether we can also utilise GNNs in abstract domains such as pure mathematics. Together with a group of mathematicians, Petar checked it for a long-standing open conjecture (40 years without significant progress!) from representation theory. The scientists wanted to<strong> understand the relationship between two objects,<\/strong> one of which could be represented as a directed graph \u2013 ideal for utilising GNNs. The method chosen allowed them to analyse and interpret the outputs with the use of attribution techniques. 
Such <strong>techniques help to understand what features or structures are relevant to the prediction.<\/strong> The group managed to <strong>discover two important structures, which finally led to a mathematical proof.<\/strong><\/p>\n\n\n\n<p>Their work proved that <strong><a href=\"https:\/\/www.future-processing.com\/services\/ai-and-ml\/\">AI<\/a> can inspire and assist humans,<\/strong> even in a very abstract domain, because it augments and guides the search of the domain. Empowering human intuition, rather than providing an explicit answer, can have <strong>a very powerful impact<\/strong> in the end.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><br>Do machines see like humans?<\/h2>\n\n\n\n<p>An <strong>artificial neural network<\/strong> is an example of <strong>an algorithm highly inspired by nature,<\/strong> i.e., <strong>biological neural networks.<\/strong> It loosely models the work of neurons in a biological brain. This method turned out to be <strong>a very powerful tool<\/strong> which can solve various problems, from understanding text to interpreting speech and recognising images. But does the human-like representation strictly imply that machine cognition and human cognition pay attention to the same characteristics of an object?<\/p>\n\n\n\n<p><strong>Matthias Bethge,<\/strong> Professor of Computational Neuroscience and Machine Learning at the University of T\u00fcbingen and director of the <a href=\"https:\/\/tue.ai\/\" rel=\"noopener\">T\u00fcbingen AI Center<\/a>, decided to verify some inductive priors in computer vision. He <strong>focused on the misalignment between human and<\/strong> <strong>machine decision boundaries,<\/strong> which basically means he examined images which were easy to recognise for humans but difficult for convolutional neural networks (CNNs), and vice versa.<\/p>\n\n\n\n<p>One of the inductive priors researched was <strong>texture-based classification. 
<\/strong>The scientist checked the prediction performance on texturised images (generated from the originals via texture synthesis) and benchmarked it against predictions on the original images. It turned out that this transformation didn\u2019t deteriorate the results. As long as the texture was the same as the original,<strong> the algorithm performed well.<\/strong> Hence, he decided to go <strong>a step further<\/strong> and constructed a dataset with elements that combined the texture of one class with the shape of a different class (for example, the shape of a cat with an elephant\u2019s texture).<\/p>\n\n\n    <div class=\"b-image js-lightbox\">\n        <figure class=\"b-image__figure\">\n            <a\n                href=\"learning-to-see-like-humans.jpg\"\n                class=\"js-lightbox__trigger\"\n                aria-haspopup=\"dialog\"\n                data-elementor-open-lightbox=\"no\"\n            >\n                <img decoding=\"async\" width=\"960\" height=\"523\" src=\"https:\/\/www.future-processing.com\/blog\/wp-content\/uploads\/2022\/12\/learning-to-see-like-humans.jpg\" class=\"attachment-full size-full\" alt=\"learning-to-see-like-humans\" srcset=\"https:\/\/www.future-processing.com\/blog\/wp-content\/uploads\/2022\/12\/learning-to-see-like-humans.jpg 960w, https:\/\/www.future-processing.com\/blog\/wp-content\/uploads\/2022\/12\/learning-to-see-like-humans-300x163.jpg 300w, https:\/\/www.future-processing.com\/blog\/wp-content\/uploads\/2022\/12\/learning-to-see-like-humans-768x418.jpg 768w, https:\/\/www.future-processing.com\/blog\/wp-content\/uploads\/2022\/12\/learning-to-see-like-humans-734x400.jpg 734w\" sizes=\"(max-width: 960px) 100vw, 960px\" \/>            <\/a>\n                            <figcaption class=\"b-image__caption f-paragraph\">Source: https:\/\/www.youtube.com\/watch?v=q4_-OeE-2Tk<\/figcaption>\n                    <\/figure>\n    <\/div>\n\n\n\n<p>He then compared the fraction of objects correctly classified by shape with the fraction of objects correctly classified by texture. 
It turned out that<strong> humans rely almost exclusively on shape,<\/strong> while <strong>CNNs are more biased towards texture information.<\/strong> If CNNs rely strongly on texture, this implies they are also more vulnerable to texture changes. Hence, <strong>we can improve the performance<\/strong> of machines by feeding the model a training set augmented with randomised textures (also generated with the use of NNs).<\/p>\n\n\n\n<p>What Matthias Bethge has shown is that <strong>we can move closer to the intended solution by comparing machine cognition with human<\/strong> <strong>cognition.<\/strong> In his work, he researched many other approaches which make <strong>machine decision-making more human-like.<\/strong> He constantly proves that the crossover between neuroscience and machine learning can significantly empower the latter.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><br>What can you infer about society by analysing ML models\u2019 bias?<\/h2>\n\n\n\n<p>During the poster session, there was a poster which particularly attracted my attention. It was <strong>a poster co-authored by Adam Zadro\u017cny<\/strong> from the National Centre for Nuclear Research and the University of Warsaw, and <strong>Marianna Zadro\u017cna<\/strong> from the Academy of Fine Arts. The researchers examined text-to-image models trained on datasets of images and captions crawled from the Internet. They analysed the results of the DALL-E mini model, which, in contrast to DALL-E and DALL-E 2, is more prone to picking up bias from the original datasets.<\/p>\n\n\n\n<p>Bias can be seen as a drawback of the model, but <strong>it can turn into a research tool for a much broader topic:<\/strong> <strong>misconceptions consolidated in society.<\/strong> The researchers generated images based on prompts linked to health. What they discovered was that, for example, the words \u2018autistic child\u2019 returned only pictures of boys, as if girls didn\u2019t suffer from autism. 
They also checked the prompt \u2018person with depression\u2019, which returned pictures of young adults. This made them wonder whether, in our collective imagination, we take into account that depression can also occur among elderly people.<\/p>\n\n\n\n<p>These are just two examples, but you can find more of them by<strong> checking the results of <a href=\"https:\/\/www.craiyon.com\/\" rel=\"noopener\">DALL-E mini<\/a> on your own.<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><br>Is the ML in PL conference worth attending?<\/h2>\n\n\n\n<p>Definitely <strong>yes!<\/strong> I\u2019d recommend this event to <strong>everyone interested in machine learning.<\/strong> It provides a lot of inspiring talks, allows you to get to know state-of-the-art techniques, and is a great occasion to exchange thoughts with other community members. The thing I like most about this event is that <strong>it strongly expands our horizons.<\/strong><\/p>\n\n\n\n<p>See you next year!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The recent ML in PL 2022 conference (organised by the ML in PL Association) was a great occasion to look at the machine learning landscape in Poland and beyond. 
Here is more!<\/p>\n","protected":false},"author":181,"featured_media":16893,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[2182],"tags":[],"coauthors":[2009],"class_list":["post-23712","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml"],"acf":{"reading-time":"","show-toc-sublists":false,"image":"","logo":"","button1":{"button1_type":"none","button":""},"button2":{"button2_type":"none","button":""},"person":{"person_photo":"","person_name":"","person_position":""}},"_links":{"self":[{"href":"https:\/\/www.future-processing.com\/blog\/wp-json\/wp\/v2\/posts\/23712","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.future-processing.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.future-processing.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.future-processing.com\/blog\/wp-json\/wp\/v2\/users\/181"}],"replies":[{"embeddable":true,"href":"https:\/\/www.future-processing.com\/blog\/wp-json\/wp\/v2\/comments?post=23712"}],"version-history":[{"count":2,"href":"https:\/\/www.future-processing.com\/blog\/wp-json\/wp\/v2\/posts\/23712\/revisions"}],"predecessor-version":[{"id":34891,"href":"https:\/\/www.future-processing.com\/blog\/wp-json\/wp\/v2\/posts\/23712\/revisions\/34891"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.future-processing.com\/blog\/wp-json\/wp\/v2\/media\/16893"}],"wp:attachment":[{"href":"https:\/\/www.future-processing.com\/blog\/wp-json\/wp\/v2\/media?parent=23712"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.future-processing.com\/blog\/wp-json\/wp\/v2\/categories?post=23712"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.future-processing.com\/blog\/wp-json\/wp\/v2\/tags?post=23712"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/w
ww.future-processing.com\/blog\/wp-json\/wp\/v2\/coauthors?post=23712"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}