<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>CONECT | Computational Neuroscience Center @ INT</title>
<link>https://conect-int.github.io/</link>
<atom:link href="https://conect-int.github.io/index.xml" rel="self" type="application/rss+xml" />
<description>CONECT | Computational Neuroscience Center @ INT</description>
<generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Fri, 29 Mar 2024 15:00:00 +0000</lastBuildDate>
<image>
<url>https://conect-int.github.io/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url>
<title>CONECT | Computational Neuroscience Center @ INT</title>
<link>https://conect-int.github.io/</link>
</image>
<item>
<title>Why CONECT?</title>
<link>https://conect-int.github.io/post/about-conect/</link>
<pubDate>Wed, 21 Apr 2021 00:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/post/about-conect/</guid>
<description><p>Neuroscience is in revolution: Over the past decade, tremendous technological advances across several disciplines have dramatically expanded the frontiers of experimentally accessible neuroscientific facts.</p>
<p>Bridging different spatial and temporal scales, the combination of in vivo two-photon imaging, large population recording-array technologies, optogenetic circuit-control tools, transgenic manipulations and large-volume circuit reconstructions is now used to examine the function, structure and dynamics of neural networks at an unprecedented level of detail and precision. Current applications of these novel techniques include sensory information processing, motor production, the neural correlates of learning, memory and decision making, as well as mechanisms of dysfunction and disease. These experiments have begun to produce a huge amount of data, across a broad spectrum of temporal and spatial scales, providing finer and more quantitative descriptions of biological reality than we could have dreamed of only a decade ago. The daunting complexity of the biological reality revealed by these technologies highlights the importance of neurophysics in providing a conceptual bridge between abstract principles of brain function and their biological implementation within neural circuits. This revolution is accompanied by a parallel revolution in the domain of Artificial Intelligence: a rapidly growing range of algorithms for sensory processing, such as image classification, and for reinforcement learning has produced practical tools that are replacing the classical tools we used on a daily basis with a new generation of intelligent ones. This is the context in which we are creating CONECT.</p>
<p>We are convinced that <em><strong>the close collaboration between experimentalists and theoreticians in neuroscience is essential to develop mechanistic as well as quantitative understandings of how the brain performs its functions</strong></em>. This is in fact a primary motivating force in establishing this center. However, for such collaborations to be effective, experimentalists must be well aware of the approaches and challenges in modeling while theoreticians must be well acquainted with the experimental techniques, their power and the challenges they present. CONECT has also the ambition to contribute to the training of a new generation of neuroscientists who will have all these qualities.</p>
<p>This approach is therefore complementary to, but distinct in its purpose from, neuroinformatics (the creation of tools for analyzing neuroscientific data) and artificial intelligence (the creation of algorithms inspired by the functioning of the brain). The field of computational neuroscience is still young, but it is now structured as an autonomous community with strong interactions with the other branches of neuroscience. It is this autonomy that we want to foster at INT.</p></description>
</item>
<item>
<title>Actors of CONECT</title>
<link>https://conect-int.github.io/post/actors-conect/</link>
<pubDate>Mon, 21 Jun 2021 00:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/post/actors-conect/</guid>
<description><p>Within the INT, many components of CONECT already exist, either led by researchers in computational neuroscience or as themes strongly anchored in this field. A survey of the current situation reveals projects at different scales.</p>
<ul>
<li><a href="https://conect-int.github.io/contact">Contact</a> us to be added!</li>
</ul>
<ul>
<li>from the cellular to the network level
<ul>
<li>deciphering the biophysical principles underlying robustness of neuronal activity using quantitative genotype-to-phenotype mapping strategies and realistic neuronal model databases (<strong>Jean-Marc Goaillard</strong>).</li>
<li>dynamics and function of small and large-scale neural networks (<strong><a href="https://conect-int.github.io/authors/laurent-u-perrinet/">Laurent Perrinet</a></strong> with <a href="../../author/frederic-y-chavane">Frédéric Chavane</a> and <strong><a href="https://conect-int.github.io/authors/matthieu-gilson/">Matthieu Gilson</a></strong>)</li>
</ul>
</li>
<li>from networks to mesoscopic levels:
<ul>
<li>Bayesian inference and predictive process models (<strong><a href="../../author/anna-montagnini">Anna Montagnini</a></strong>, <a href="../../author/emmanuel-dauce">Emmanuel Daucé</a> and <a href="https://conect-int.github.io/authors/laurent-u-perrinet/">Laurent Perrinet</a>), reinforcement learning, action selection and decision making (<a href="../../author/andrea-brovelli">Andrea Brovelli</a> and <a href="../../author/emmanuel-dauce">Emmanuel Daucé</a>), and links with attentional mechanisms (Guilhem Ibos)</li>
<li>information theory and functional connectivity for the analysis of cognitive brain networks (<strong><a href="../../author/andrea-brovelli">Andrea Brovelli</a></strong>, <strong><a href="https://conect-int.github.io/authors/matthieu-gilson/">Matthieu Gilson</a></strong> and <a href="../../author/bruno-giordano">Bruno Giordano</a>)</li>
<li>deep learning for data processing (<strong><a href="../../author/bruno-giordano">Bruno Giordano</a></strong>), deep learning and neuroimaging (<em>in voice perception</em>) (Charly Lamothe), and computational neuroscience and data processing in neuroinformatics (Sylvain Takerkart, NIT platform)</li>
</ul>
</li>
<li>at the brain level
<ul>
<li>brain anatomy, particularly as applied to the formation of cortical folding (Julien Lefèvre with Guillaume Auzias, Sylvain Takerkart and <strong><a href="../../author/olivier-coulon">Olivier Coulon</a></strong>),</li>
<li>the development of prognostic models of the evolution of certain pathologies (Lionel Velly, <strong>Sylvain Takerkart</strong>),</li>
<li>the development of collaborations between theoretical neuroscience and neuroinformatics, notably with the <a href="http://www.int.univ-amu.fr/spip.php?page=plateform&amp;equipe=CRISE&amp;lang=fr" target="_blank" rel="noopener">NIT</a> (<strong>Sylvain Takerkart</strong>, Guillaume Auzias)</li>
</ul>
</li>
</ul>
<p>Structuring these different components through a center (independent of existing and future teams) would be a major asset in reaching a new stage in the creation of INT³.</p></description>
</item>
<item>
<title>Objectives of CONECT</title>
<link>https://conect-int.github.io/post/objectives-conect/</link>
<pubDate>Tue, 21 Sep 2021 00:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/post/objectives-conect/</guid>
<description><p>The Computational Neuroscience Center (CONECT) is an incubator within the INT to promote theoretical and computational neuroscience.</p>
<p>It aims to contribute to the training of a new generation of neuroscientists, following the revolution experienced by neuroscience over the past decades: tremendous technological advances across several disciplines have dramatically expanded the frontiers of experimentally accessible neuroscientific facts. CONECT is thus concerned with new analysis tools and models that can account for large and complex datasets, in parallel with the NeuroTech Center, which focuses on experimental devices.</p>
<p>CONECT aims to build an interdisciplinary community within the INT to foster interactions among computational neuroscientists and with experimentalists. The means include scientific animation (journal clubs, seminars) and practical sessions to learn and master new tools. This will help structure the teaching and research environment around computational neuroscience at AMU, beyond INT alone. The plan is to involve not only local research partners of INT such as NeuroMarseille and the Laennec Institute, but also engineering schools (PolyTech, Centrale Marseille) and applied mathematics master&rsquo;s programs.</p></description>
</item>
<item>
<title>2024-03-29 : INT-CONECT seminar by Joe MacInnes</title>
<link>https://conect-int.github.io/talk/2024-03-29-int-conect-seminar-by-joe-macinnes/</link>
<pubDate>Fri, 29 Mar 2024 15:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2024-03-29-int-conect-seminar-by-joe-macinnes/</guid>
<description><ul>
<li>When: Friday, March 29th <em><strong>14:30 to 15:30</strong></em></li>
<li>Where: <em>salle Gastaut</em> (TBC)</li>
</ul>
<p>During this INT/CONECT seminar, <a href="https://www.swansea.ac.uk/staff/william.macinnes/" target="_blank" rel="noopener">Dr Joe MacInnes</a> will present his modelling work on &ldquo;<strong>Casting a wide (neural) net: models and simulations of eye movements and attention</strong>&rdquo;</p>
<blockquote>
<p>Abstract : Eye movements are an excellent proxy for visual attention, and offer a rich source of behavioural data. Decades of neuroscience and experimental results have provided many interesting artefacts that hint at underlying attentional mechanisms. Models and simulations of attention allow us the opportunity to implement our best theories and test them against a wide variety of experimental and imaging results. Simulations of human eye movements, for example, can predict where we allocate attention, the temporal distributions of saccades and even the presence of attentional artifacts like Inhibition of Return, errors, anticipations and even virtual TMS lesions. This talk will cover a couple of recent models that simulate attention and eye movements using deep learning neural nets, spiking network layers and diffusion models to simulate aspects, quirks and mechanisms of human attention.</p>
</blockquote>
<div class="alert alert-note">
<div>
Joe has an interdisciplinary PhD from Dalhousie University in Canada combining computer science (graphics and machine learning) with psychology (eye movements and attention). He has worked in psychology departments (University of Toronto, University of Aberdeen, and HSE University Moscow), computer science departments (Dalhousie University and St Mary’s University, Canada, and Swansea University, Wales) and in industry research (data visualization and AI, Canada). He is currently a Senior Lecturer in AI in the Department of Computer Science at Swansea University.
</div>
</div>
</description>
</item>
<item>
<title>2024 March 13-14: Workshop TransINT</title>
<link>https://conect-int.github.io/talk/2024-march-13-14-workshop-transint/</link>
<pubDate>Wed, 13 Mar 2024 09:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2024-march-13-14-workshop-transint/</guid>
<description><ul>
<li>When: Wednesday-Thursday <em><strong>9:00 to 19:00</strong></em></li>
<li>Where: <em>salle Henri Gastaut</em></li>
</ul>
<p>All details on <a href="https://conect-int.github.io/transint.github.io/" target="_blank" rel="noopener">https://conect-int.github.io/transint.github.io/</a></p>
<div class="alert alert-note">
<div>
This workshop is supported by INT and NeuroMarseille.
</div>
</div>
</description>
</item>
<item>
<title>2024-03-11 : CONECT seminar by Danny Burnham</title>
<link>https://conect-int.github.io/talk/2024-03-11-conect-seminar-by-danny-burnham/</link>
<pubDate>Mon, 11 Mar 2024 15:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2024-03-11-conect-seminar-by-danny-burnham/</guid>
<description><ul>
<li>When: Monday, March 11th <em><strong>14:30 to 16:00</strong></em></li>
<li>Where: <em>salle Laurent Vinay</em></li>
</ul>
<p>During this CONECT seminar, <a href="https://ion.uoregon.edu/about/person-page/277" target="_blank" rel="noopener">Danny Burnham</a> will present his recent work on &ldquo;<strong>Mice alternate between inference- and stimulus-bound strategies during probabilistic foraging</strong>&rdquo;</p>
<blockquote>
<p>Abstract : Essential features of the world are often hidden and must be inferred by constructing internal models based on indirect evidence. During foraging, animals must continually choose between trying to exploit a depleting food source at their current location and leaving to explore a new source at the expense of costly travel epochs. In a deterministic environment, the optimal strategy is to leave the current site when the immediate rate of reward drops below the average rate - a stimulus-bound strategy, assigning each action a value that is updated based on its immediate outcome. This strategy, however, is not optimal in a realistic foraging scenario, where rewards are encountered probabilistically and the optimal strategy is inference-bound, requiring the animal to infer the hidden structure of the world. Motivated by recent studies showing that mice alternate between discrete strategies during perceptual decision-making, we test the hypothesis that mouse behavior during a probabilistic foraging task switches between inference- and stimulus-bound strategies within the same session. To this end, we developed a novel hidden Markov model with linear emissions (LM-HMM) to capture this switching dynamic. When applied to mice engaged in the task, the LM-HMM revealed that mice switch between distinct inference-bound and stimulus-bound strategies exhibiting varying impulsivities.</p>
</blockquote>
<div class="alert alert-note">
<div>
Danny Burnham is a computational neuroscientist from the <a href="https://ion.uoregon.edu/about/person-page/277" target="_blank" rel="noopener">Institute of Neuroscience at the University of Oregon</a> interested in Artificial Neural Network models of learning and memory.
</div>
</div>
</description>
</item>
<item>
<title>2024-03-05 : CONECT seminar by Prof Elia Formisano</title>
<link>https://conect-int.github.io/talk/2024-03-29-conect-seminar-by-prof-elia-formisano/</link>
<pubDate>Tue, 05 Mar 2024 15:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2024-03-29-conect-seminar-by-prof-elia-formisano/</guid>
<description><ul>
<li>When: Tuesday, March 5th <em><strong>10:00 to 11:00</strong></em></li>
<li>Where: <em>salle Vinay</em> (R+1)</li>
</ul>
<p>During this INT/CONECT seminar, <a href="https://www.maastrichtuniversity.nl/e-formisano" target="_blank" rel="noopener">Prof. Elia Formisano</a> will present his recent work on the neuroscience and computational modelling of natural sounds &ldquo;<strong>Auditory Cognition: Bridging Human and Machine Perspectives</strong>&rdquo;</p>
<blockquote>
<p>Abstract : The ability to recognize and interpret sounds is crucial for both humans and, increasingly, machines. From the chirping of birds to the sirens of emergency vehicles, sound perception allows us to understand events and identify objects, even in challenging contexts like darkness or behind barriers where visual information lacks. Drawing on interdisciplinary research from cognitive psychology, neuroscience, and artificial intelligence (AI), I will discuss current models of how the human brain processes natural sounds, transforming complex acoustic waveforms into meaningful semantic representations. I will then explore potential directions for collaborative developments in AI and neuroscience, framed as a tool for deepening our understanding of the neural computations involved in the extraction of diverse semantic information from naturalistic soundscapes.</p>
</blockquote>
<div class="alert alert-note">
<div>
Elia Formisano received his MSc degree in Electronic Engineering in 1996 from the University of Naples (Italy) and his PhD from the national (Italian) program in Bioengineering in 2000. Thanks to an outgoing grant, in 1998-1999 he was a visiting research fellow at the Max Planck Institute for Brain Research in Frankfurt/Main. In January 2000, he was appointed Assistant Professor at Maastricht University (Faculty of Psychology and Neuroscience), where he is now Professor of Neuroimaging Methods: Neural Signal Analysis. From 2008 to 2013, he was Head of the Department of Cognitive Neuroscience. He is the scientific director of the Maastricht Brain Imaging Center (MBIC), Principal Investigator of the Auditory Perception and Cognition group and founding member of the Maastricht Center for Systems Biology (MaCSBio). His research is supported by several national (e.g. NWO VIDI, VICI, Gravitation) and international funding sources. His research aims at discovering the neural basis of human auditory perception, cognition and plasticity. He pioneered the use of ultra-high magnetic field (7 Tesla) functional MRI and multivariate modeling in neuroscience studies of audition. He is actively involved in methods development, focusing on algorithms for unsupervised and supervised learning. On these topics, he has published in high-ranking journals, including Science, Neuron, PNAS and Current Biology. He has about 20 years of teaching experience, which includes the development of courses and curricula at bachelor, master and graduate school level on topics of cognitive neuroscience, neuroimaging and biomedical engineering (biomedical signal and image analysis). In 2008-2010, he was Chair of the Educational Program for the Organization for Human Brain Mapping (OHBM) meetings. <a href="https://scholar.google.com/citations?user=WTnN8C0AAAAJ">Google Scholar</a>
</div>
</div>
</description>
</item>
<item>
<title>2023-10-02 : CONECT seminar by Lorenzo Fontolan</title>
<link>https://conect-int.github.io/talk/2023-10-02-conect-seminar-by-lorenzo-fontolan/</link>
<pubDate>Mon, 02 Oct 2023 15:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2023-10-02-conect-seminar-by-lorenzo-fontolan/</guid>
<description><ul>
<li>When: Monday <em><strong>15:00 to 16:00</strong></em></li>
<li>Where: <em>salle Laurent Vinay</em></li>
</ul>
<p>During this CONECT seminar, <a href="https://fontolanl.github.io/" target="_blank" rel="noopener">Lorenzo Fontolan</a> will present his recent work on &ldquo;<strong>Neural mechanisms of memory-guided motor learning</strong>&rdquo;</p>
<blockquote>
<p>Abstract : TBA</p>
</blockquote>
<div class="alert alert-note">
<div>
Lorenzo Fontolan is a computational neuroscientist interested in how neural interactions give rise to cognitive phenomena, how brain circuits change during learning, and how mental disorders disrupt communication pathways in the brain. Currently he is a CENTURI Group Leader at Aix-Marseille University in France.
</div>
</div>
</description>
</item>
<item>
<title>2023-09-22: Coding Club x CONECT on Optuna</title>
<link>https://conect-int.github.io/talk/2023-09-22-coding-club-x-conect-on-optuna/</link>
<pubDate>Fri, 22 Sep 2023 11:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2023-09-22-coding-club-x-conect-on-optuna/</guid>
<description><ul>
<li>When: On Fridays, <em><strong>11:00am to 12:00pm</strong></em></li>
<li>Where: <em>salle Laurent Vinay (INT)</em></li>
</ul>
<p>This is the first of a series of monthly sessions on advanced computational tools for neuroscience, held during the <a href="https://framateam.org/int-marseille/channels/coding-club" target="_blank" rel="noopener">coding club</a> organized by the <a href="https://www.int.univ-amu.fr/plateformes/nit" target="_blank" rel="noopener">NIT</a>.</p>
<p>This Friday, 22nd Sept, we will look into <a href="https://optuna.org/" target="_blank" rel="noopener">Optuna</a>, a general-purpose optimizer for hyperparameters. It combines a diversity of <a href="https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/003_efficient_optimization_algorithms.html" target="_blank" rel="noopener">modern search algorithms</a> to optimize models in a black-box fashion: one defines the hyperparameters to optimize and an objective/cost function. Check the <a href="https://optuna.readthedocs.io/en/stable/tutorial/index.html" target="_blank" rel="noopener">tutorials</a> for further information.</p>
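<p>As a rough illustration of the black-box loop such optimizers implement (this pure-Python random-search sketch with a hypothetical toy objective is ours, not Optuna&rsquo;s API; Optuna adds smarter samplers and pruning on top of the same define-and-evaluate pattern):</p>

```python
import random

def objective(params):
    # Hypothetical cost function standing in for e.g. a model's validation loss;
    # its minimum is at x = 2.
    x = params["x"]
    return (x - 2.0) ** 2

def random_search(n_trials, seed=0):
    # Black-box loop: sample hyperparameters from the defined search space,
    # evaluate the objective, and keep the best trial seen so far.
    rng = random.Random(seed)
    best_params, best_value = None, float("inf")
    for _ in range(n_trials):
        params = {"x": rng.uniform(-10.0, 10.0)}
        value = objective(params)
        if value < best_value:
            best_params, best_value = params, value
    return best_params, best_value

best_params, best_value = random_search(n_trials=200)
```

<p>The objective never exposes its internals to the optimizer; only parameter suggestions and returned costs cross the interface, which is what makes the approach &ldquo;black-box&rdquo;.</p>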
<blockquote>
<p>COME TO SHARE AND TEST RECENT COMPUTATIONAL TOOLS FOR NEUROMODELING, ANALYZING YOUR DATA AND MUCH MORE!</p>
</blockquote>
<p>The general purpose of these sessions is in line with the Coding Club&rsquo;s promotion of open science in our daily practice. The focus of the CONECT contributions is on advanced computational packages (e.g. in Python or R) of interest to the local neuroscientific community, both theoreticians and experimentalists. Methodological questions, such as how to model specific neuronal systems and fit experimental data, are also within the scope of these sessions, again with a focus on practical tools for simulation and analysis.</p>
<div class="alert alert-note">
<div>
To CONECT members: Propose topics at <a href="https://amubox.univ-amu.fr/f/1359219652" target="_blank" rel="noopener">Coding Club x CONECT</a>.
And check the Mattermost channel for the schedule.
</div>
</div>
</description>
</item>
<item>
<title>Monthly coding club by CONECT</title>
<link>https://conect-int.github.io/slides/2023-09-22-coding-club-conect/</link>
<pubDate>Fri, 22 Sep 2023 11:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/slides/2023-09-22-coding-club-conect/</guid>
<description>
<section data-noprocess data-shortcode-slide
data-background-image="/media/open-book.jpg"
>
<h1 id="monthly-coding-club-by-conect">Monthly coding club by CONECT</h1>
<h2 id="advanced-computational-tools-for-neuroscience-and-open-science">Advanced computational tools for neuroscience and open science</h2>
<ul>
<li>When? One Friday per month, at 11:00am</li>
<li>Where? salle Vinay at INT</li>
</ul>
<hr>
<h2 id="2023-09-22-at-1100am-optuna">2023-09-22 at 11.00am: Optuna</h2>
<p><a href="https://optuna.org/" target="_blank" rel="noopener">Optuna</a> is a general-purpose optimizer for hyperparameters. It combines a diversity of <a href="https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/003_efficient_optimization_algorithms.html" target="_blank" rel="noopener">modern search algorithms</a> to optimize models in a black-box fashion: one defines the hyperparameters to optimize and an objective/cost function. Check the <a href="https://optuna.readthedocs.io/en/stable/tutorial/index.html" target="_blank" rel="noopener">tutorials</a> for further information.</p>
</description>
</item>
<item>
<title>2023-09-21 : CONECT seminar by Jason Eshraghian</title>
<link>https://conect-int.github.io/talk/2023-09-21-conect-seminar-by-jason-eshraghian/</link>
<pubDate>Thu, 21 Sep 2023 14:30:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2023-09-21-conect-seminar-by-jason-eshraghian/</guid>
<description><ul>
<li>When: Thursday <em><strong>2:30pm to 4pm</strong></em></li>
<li>Where: <em>salle Laurent Vinay</em></li>
</ul>
<p>During this CONECT seminar, <a href="https://ncg.ucsc.edu/jason-eshraghian-bio/" target="_blank" rel="noopener">Jason Eshraghian</a> will present his recent work on &ldquo;<strong>Making spiking neural networks do useful things</strong>&rdquo;</p>
<blockquote>
<p>This presentation will dive into how spiking neural networks can be trained to accomplish practical engineering problems. We will provide an overview of the various learning rules that have emerged over the past several decades, along with a few large-scale applications we&rsquo;ve achieved with spike-based computation. These include our spike-based language model, SpikeGPT, and snnTorch, our open-source Python library that brings gradient-based optimization to spike-based models.</p>
</blockquote>
<div class="alert alert-note">
<div>
Jason K. Eshraghian is an Assistant Professor with the Department of Electrical and Computer Engineering, University of California, Santa Cruz and the maintainer of <a href="https://github.com/jeshraghian/snntorch" target="_blank" rel="noopener">snnTorch</a>.
</div>
</div>
<p>In addition, we had a master class in the morning on <a href="https://snntorch.readthedocs.io/en/latest/index.html" target="_blank" rel="noopener">snnTorch</a>; the notebook is available upon request.</p>
</description>
</item>
<item>
<title>2023-09-11: CONECT seminar by Taro Toyoizumi</title>
<link>https://conect-int.github.io/talk/2023-09-11-conect-seminar-by-taro-toyoizumi/</link>
<pubDate>Mon, 11 Sep 2023 14:30:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2023-09-11-conect-seminar-by-taro-toyoizumi/</guid>
<description><ul>
<li>When: Monday 11th Sept 2023 <em><strong>2.30pm to 3.30pm</strong></em></li>
<li>Where: <em>salle de cours 5, bâtiment principal Timone (aile verte)</em></li>
</ul>
<p>During this CONECT seminar co-organized with Institut de Neurosciences des Systèmes (INS), <a href="https://toyoizumilab.riken.jp/" target="_blank" rel="noopener">Taro Toyoizumi</a> will present his recent work on &ldquo;<strong>Modeling the fluctuations and state-dependence of synaptic dynamics</strong>&rdquo;</p>
<blockquote>
<p>Abstract: Adaptive behavior, crucial for thriving in complex environments, is believed to be enabled by activity-dependent synaptic plasticity within neural circuits. In the first part of this talk, I present how synaptic plasticity could be stabilized in the brain. Conventional models of Hebbian plasticity often facilitate connections between coincidentally active neurons and produce pathologically synchronous neural activity. I demonstrate that biologically observed intrinsic synaptic dynamics—activity-independent changes in synapses—can maintain a physiological distribution of synaptic strength and stabilize memory within neural networks. In the second part, I adopt a top-down approach to model synaptic plasticity. Viewing the brain as an efficient information-processing organ, I assume that synaptic weights are updated to transmit information between neurons efficiently. This theory provides insights into the distinct outcomes of synaptic plasticity observed during the up and down states of non-rapid eye movement sleep, thereby shedding light on how memory consolidation may be influenced by the states and spatial scale of slow waves.</p>
</blockquote>
<div class="alert alert-note">
<div>
Taro Toyoizumi leads the Lab for Neural Computation and Adaptation at <a href="https://cbs.riken.jp/en/" target="_blank" rel="noopener">RIKEN Center for Brain Science</a>.
</div>
</div>
</description>
</item>
<item>
<title>2023-07-10 : CONECT seminar by Adrien Fois</title>
<link>https://conect-int.github.io/talk/2023-07-10-conect-seminar-by-adrien-fois/</link>
<pubDate>Mon, 10 Jul 2023 14:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2023-07-10-conect-seminar-by-adrien-fois/</guid>
<description><p>During this CONECT seminar, <a href="https://www.researchgate.net/profile/Adrien-Fois-3" target="_blank" rel="noopener">Adrien Fois</a> presented his recent work on &ldquo;<strong>Plasticity and Temporal Coding in Spiking Neural Networks Applied to Representation Learning</strong>&rdquo;:</p>
<blockquote>
<p>The brain is a highly efficient computational system, capable of delivering 600 petaFlops while consuming only 20 W of energy, comparable to that of a light bulb. Computation is based on neural impulses, involving information encoding in the form of spikes and learning based on these spikes. According to the dominant paradigm, information is encoded by the number of spikes. However, an alternative paradigm suggests that information is contained in the precise timing of the spikes, offering significant advantages in terms of energy efficiency and information transfer speed.</p>
</blockquote>
<blockquote>
<p>My work aims to extract representations from temporal codes using event-based learning rules that are both spatially and temporally local. In particular, I will present a learning model that learns representations not in synaptic weights, but in transmission delays, which inherently operate in the temporal domain. Learning delays prove to be particularly relevant for processing temporal codes and enable the activation of a key function of spiking neurons: the detection of temporal coincidences.</p>
</blockquote>
<div class="alert alert-note">
<div>
Adrien Fois was a PhD student at the Institut Lorrain de Recherche en Informatique et ses Applications (LORIA) and is now a post-doc at INT.
</div>
</div>
</description>
</item>
<item>
<title>2023-06-20: CONECT at the CENTURI summer school</title>
<link>https://conect-int.github.io/talk/2023-06-20-conect-at-the-centuri-summer-school/</link>
<pubDate>Tue, 20 Jun 2023 14:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2023-06-20-conect-at-the-centuri-summer-school/</guid>
<description><h1 id="title-neural-computation-through-population-dynamics">Title: Neural computation through population dynamics</h1>
<div class="alert alert-note">
<div>
Program under construction - you can already check the program of the <a href="https://centuri-livingsystems.org/centuri-summer-school-2023/" target="_blank" rel="noopener">summer school</a> (June 20 - July 01, 2023) or directly access the <a href="https://centuri-livingsystems.org/wp-content/uploads/2018/02/SUMMER-SCHOOL-program-2023.pdf" target="_blank" rel="noopener">detailed program</a>.
</div>
</div>
<h2 id="question">Question</h2>
<p>Detecting precise spiking motifs in neurobiological data</p>
<h2 id="preliminary-program">Preliminary program</h2>
<ul>
<li>Monday, June 19 – Room 4 at CIELL (Hexagone building – 1st floor)
<ul>
<li>14h-16h : introduction by Rosa and Pierre</li>
<li>16h-17h : group project presentation (in Hexagone auditorium)</li>
</ul>
</li>
<li>Tuesday, June 20
<ul>
<li>13h-14h15: group lunch at CROUS (booking in the small room)</li>
</ul>
</li>
<li>Tuesday, June 20 to Thursday, June 29
<ul>
<li>Afternoon 14:30 - 17:00: group projects in Room 4 at CIELL</li>
<li>Room 4 at CIELL (Hexagone building- 1st floor)</li>
</ul>
</li>
<li>Wednesday, June 28 and Thursday, June 29
<ul>
<li>Room 4 at CIELL (Hexagone building- 1st floor)</li>
<li>All day: Group projects in Institutes</li>
</ul>
</li>
<li>Friday, June 30 – HEXAGONE AUDITORIUM
<ul>
<li>09h30-12h: presentation of group projects</li>
<li>12h-13h30 : group lunch – buffet in the HEXAGONE Hall</li>
<li>13h30: end of the event</li>
</ul>
</li>
</ul>
<h2 id="challenge">Challenge</h2>
<p>At any given instant, hundreds of billions of cells in our brains are lighting up in a complicated yet highly coordinated manner to give rise to our thoughts, percepts, and movements. A single neuron may be connected to thousands of other cells, sending out and receiving information through electrical impulses called spikes. From an engineering perspective, these spikes form a signal that may be viewed as a series of ones and zeros rapidly unfolding in time. Altogether, these signals reflect the ongoing computations taking place inside the nervous system, and as such, constitute a window into the brain’s inner workings. Recent advances in recording techniques have allowed experimenters to collect data from hundreds to thousands of neurons simultaneously while animals perform simple tasks. Dealing with such high-dimensional data poses important technical challenges that require elaborate methods for data mining and analysis. In this project, students will deal with datasets of increasing complexity and develop a set of analyses to extract meaningful information from the data.</p>
<h2 id="type-of-data">Type of data</h2>
<p>The following datasets will be shared by the teaching staff:</p>
<ul>
<li>
<p>publicly available recordings from a reaching task from Hatsopoulos, Joshi, and O&rsquo;Leary (2004) <a href="https://journals.physiology.org/doi/full/10.1152/jn.01245.2003" target="_blank" rel="noopener">doi:10.1152/jn.01245.2003</a></p>
</li>
<li>
<p>publicly available recordings from the dorsomedial frontal cortex of NHPs performing a time-interval reproduction task, from Meirhaeghe, Sohn, and Jazayeri (2021) <a href="https://www.cell.com/neuron/fulltext/S0896-6273%2821%2900622-X" target="_blank" rel="noopener">doi:10.1016/j.neuron.2021.08.025</a> - see <a href="https://github.com/jazlab/Meirhaeghe2021" target="_blank" rel="noopener">https://github.com/jazlab/Meirhaeghe2021</a>.</p>
</li>
</ul>
<h2 id="methods">Methods</h2>
<p>Data visualisation, neural decoding, principal component analysis, kinematic and geometric analyses of neural trajectories in high-dimensional space, hypothesis-testing, null distributions and statistics</p>
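Two of the listed methods can be illustrated with a minimal, self-contained sketch on made-up data: PCA (via SVD) reduces a trials-by-neurons matrix of spike counts, and a simple nearest-centroid rule decodes reach direction in the reduced space. All data, sizes, and parameters below are hypothetical, chosen only for illustration:

```python
# Minimal sketch (hypothetical data): PCA via SVD on trial-by-neuron
# spike counts, then nearest-centroid decoding of reach direction.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons, n_dirs = 80, 50, 8
directions = rng.integers(0, n_dirs, n_trials)      # reach target per trial
tuning = rng.normal(size=(n_dirs, n_neurons))       # hypothetical tuning curves
X = tuning[directions] + 0.3 * rng.normal(size=(n_trials, n_neurons))

# PCA: center the data and project onto the top principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[: n_dirs - 1].T                         # trials in reduced PC space

# Nearest-centroid decoding in the reduced space
centroids = np.stack([Z[directions == d].mean(axis=0) for d in range(n_dirs)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
accuracy = (pred == directions).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

A real analysis would cross-validate the decoder and compare its accuracy against a null distribution obtained by shuffling trial labels, as in the hypothesis-testing methods listed above.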
<h2 id="resources">Resources</h2>
<ul>
<li>
<p><a href="https://github.com/CONECT-INT/2023_CENTURI-SummerSchool" target="_blank" rel="noopener">https://github.com/CONECT-INT/2023_CENTURI-SummerSchool</a></p>
</li>
<li>
<p><a href="https://github.com/SpikeAI/2022-11_brainhack_DetecSpikMotifs" target="_blank" rel="noopener">https://github.com/SpikeAI/2022-11_brainhack_DetecSpikMotifs</a></p>
</li>
</ul>
</description>
</item>
<item>
<title>Computational Neuroscience project</title>
<link>https://conect-int.github.io/slides/2023-06-19-conect-centuri-summer-school/</link>
<pubDate>Mon, 19 Jun 2023 14:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/slides/2023-06-19-conect-centuri-summer-school/</guid>
<description>
<section data-noprocess data-shortcode-slide
data-background-image="/media/open-book.jpg"
>
<h2 id="neural-computation-through-population-dynamics">Neural computation through population dynamics</h2>
<h5 id="computational-neuroscience-project">Computational Neuroscience project</h5>
<h3 id="centuri-summer-school">CENTURI Summer school</h3>
<p><a href="https://conect-int.github.io/talk/2023-06-20-conect-at-the-centuri-summer-school/" target="_blank" rel="noopener">https://conect-int.github.io/talk/2023-06-20-conect-at-the-centuri-summer-school/</a></p>
<aside class="notes">
<p><strong>1 MINUTE</strong></p>
<ul>
<li>Press <code>S</code> key to view</li>
<li>this project is part of the CENTURI summer school - and we would like to thank the organizers of the school&hellip;</li>
<li>In this short presentation, we will present the challenges that we want to tackle and which we named&hellip;</li>
</ul>
</aside>
<hr>
<h2 id="who-are-we">Who are we?</h2>
<table>
<tr>
<th><img data-src="https://conect-int.github.io/authors/nicolas-meirhaeghe/avatar.jpg" height="200" /></th>
<th><img data-src="https://conect-int.github.io/authors/laurent-u-perrinet/avatar.png" height="200" /></th>
</tr>
<tr>
<td>Nicolas<BR>Meirhaeghe</td>
<td>Laurent<BR>Perrinet</td>
</tr>
</table>
<aside class="notes">
<p><strong>2 MINUTE</strong></p>
<p>This project is supervised by NM and myself. We are both at the INT, working at the interface between neurophysiology and computational modelling.</p>
</aside>
<hr>
<h2 id="challenge-brain-decoding">Challenge: brain decoding</h2>
<span class="fragment " >
<img data-src="https://raw.githubusercontent.com/CONECT-INT/2023_CENTURI-SummerSchool/main/datasets/dataset1_reaching-task/decoding.png" height="420" />
</span>
<aside class="notes">
<p><strong>2 MINUTE</strong></p>
<ul>
<li>our brains light up billions of cells, in majority carried by action potentials, or <em>spikes</em>,</li>
<li>neural activity is structured in a way that allows agents to act on the world</li>
<li>we wish to better understand this relationship by using machine learning.</li>
</ul>
<p>In this example, a monkey views a display associated with a reaching task while neural activity (raster plot) is recorded in the premotor area. Our goal is to design a computational method that predicts the actual behavior; succeeding at this helps us better understand the computational principles of the brain.</p>
<ul>
<li>application to BCI</li>
</ul>
<p>&ldquo;What I can build, I can understand&rdquo;
(or, to be more modest, as Feynman said: &ldquo;What I cannot create, I do not understand.&rdquo;)</p>
</aside>
<hr>
<h2 id="objectives">Objectives</h2>
<ul>
<li>Learn computational methods to interpret and interrogate neural data</li>
<li>Learn to reduce the complexity of high-dimensional neural data</li>
<li>Learn statistical approaches to perform hypothesis-testing on neural data</li>
<li>Learn the principles of decoding analyses to relate neural data to behavioral data</li>
</ul>
<aside class="notes">
<p><strong>2 MINUTES</strong></p>
<p>The objectives in this project are:
&hellip;</p>
</aside>
<hr>
<h2 id="datasets">Datasets</h2>
<ul>
<li>Dataset 1: reaching task (Hatsopoulos et al., J. Neurophysiol., 2004)</li>
</ul>
<span class="fragment " >
<ul>
<li>Dataset 2: time interval task (Meirhaeghe et al., Neuron, 2021)</li>
</ul>
</span>
<aside class="notes">
<p><strong>1 MINUTE</strong></p>
<p>During the project we will focus on two datasets:</p>
<ul>
<li>&hellip; which is openly available</li>
<li>the second &hellip; which will be provided during the course</li>
</ul>
</aside>
<hr>
<h2 id="dataset-1-reaching-task">Dataset 1: reaching task</h2>
<span class="fragment " >
<p><img data-src="https://raw.githubusercontent.com/CONECT-INT/2023_CENTURI-SummerSchool/main/datasets/dataset1_reaching-task/centerout-task.png" height="200" /><img data-src="https://raw.githubusercontent.com/CONECT-INT/2023_CENTURI-SummerSchool/main/datasets/dataset1_reaching-task/trajectories.png" height="300" /></p>
<p>Hatsopoulos, Joshi, and O&rsquo;Leary (2004) <a href="https://journals.physiology.org/doi/full/10.1152/jn.01245.2003" target="_blank" rel="noopener">doi:10.1152/jn.01245.2003</a></p>
</span>
<span class="fragment " >
<h5 id="goal-decode-intended-arm-movements-from-motor-cortical-activity">Goal: decode intended arm movements from motor cortical activity</h5>
</span>
<aside class="notes">
<p><strong>1 MINUTE</strong></p>
<p>The first dataset is a classic reaching task. It consists of recordings in primary motor (MI) and dorsal premotor (PMd) cortices of behaving monkeys performing a reaching task, i.e., instructed to move a cursor from the center to a target.</p>
</aside>
<hr>
<h2 id="dataset-2-time-interval-task">Dataset 2: time interval task</h2>
<img data-src="https://raw.githubusercontent.com/CONECT-INT/2023_CENTURI-SummerSchool/main/datasets/dataset2_time-interval-task/dataset2_fig1A.png" height="300" />
<span class="fragment " >
<img data-src="https://raw.githubusercontent.com/CONECT-INT/2023_CENTURI-SummerSchool/main/datasets/dataset2_time-interval-task/dataset2_fig2.png" height="300" />
</span>
<p>Meirhaeghe, Sohn, and Jazayeri (2021) <a href="https://www.cell.com/neuron/fulltext/S0896-6273%2821%2900622-X" target="_blank" rel="noopener">doi:10.1016/j.neuron.2021.08.025</a></p>
<span class="fragment " >
<h5 id="goal-relating-neural-dynamics-to-animals-behavioral-performance">Goal: relate neural dynamics to animals’ behavioral performance</h5>
</span>
<aside class="notes">
<p><strong>1 MINUTE</strong></p>
<p>the second dataset is more challenging and involves :</p>
<ul>
<li>Monkeys measured time intervals drawn from various distributions</li>
<li>Activity in the frontal cortex scaled in time with the mean interval</li>
<li>Temporal scaling allowed time to be encoded predictively relative to the mean</li>
</ul>
</aside>
<hr>
<h2 id="dataset-2-time-interval-task-1">Dataset 2: time interval task</h2>
<img data-src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/malvache2016.png" height="300" />
<p>Malvache, Reichinnek, Vilette, Haimerl &amp; Cossart (2016) <a href="https://www.science.org/doi/10.1126/science.aaf3319" target="_blank" rel="noopener">doi:10.1126/science.aaf3319</a></p>
<span class="fragment " >
<h5 id="goal-use-precise-spike-times-to-improve-decoding">Goal: use precise spike times to improve decoding</h5>
</span>
<aside class="notes">
<p><strong>2 MINUTE</strong></p>
<ul>
<li>
<p>our goal is to improve decoding</p>
</li>
<li>
<p>Internal representation of hippocampal neuronal population spans a time-distance continuum.</p>
</li>
<li>
<p>yet the domain is vast, and there is a lot left to do with SNNs</p>
</li>
</ul>
</aside>
<!--
---
## Dataset 2: time interval task
<img data-src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" height="300" />
Haimerl, Angulo-Garcia *et al*, (2019) [doi:10.1073/pnas.1718518116](https://doi.org/10.1073/pnas.1718518116)
##### Goal: use precise spike times to improve decoding
<aside class="notes">
<p><strong>2 MINUTE</strong></p>
<ul>
<li>
<p>our goal is to improve decoding</p>
</li>
<li>
<p>Internal representation of hippocampal neuronal population spans a time-distance continuum.</p>
</li>
<li>
<p>yet the domain is vast, and there s lot to do in SNNs</p>
</li>
</ul>
</aside> -->
<hr>
<h1 id="questions">Questions?</h1>
<ul>
<li>home page: <a href="https://conect-int.github.io/talk/2023-06-20-conect-at-the-centuri-summer-school/" target="_blank" rel="noopener">https://conect-int.github.io/talk/2023-06-20-conect-at-the-centuri-summer-school/</a></li>
<li>Contact us @ <a href="mailto:nmrghe@gmail.com,laurent.perrinet@univ-amu.fr">nicolas.meirhaeghe@univ-amu.fr, laurent.perrinet@univ-amu.fr</a></li>
<li>GitHub repository: <a href="https://github.com/CONECT-INT/2023_CENTURI-SummerSchool" target="_blank" rel="noopener">https://github.com/CONECT-INT/2023_CENTURI-SummerSchool</a></li>
</ul>
<aside class="notes">
<p><strong>1 MINUTE</strong></p>
<ul>
<li>We look forward to starting to work with you on this project!</li>
</ul>
</aside>
<hr>
<h1 id="questions-1">Questions?</h1>
<img data-src="data:image/gif;base64,R0lGODlhpACkAJEAAAAAAP///wAAAAAAACH5BAEAAAIALAAAAACkAKQAAAL/jI+py+0Po5y02ouz3rz7D4biSJbmiabqyrbuC8fyTNd2DOT6zvc+4wsCIMKiMYfYUY5M3bLJBEKJ0GrSOaken1qhtEntbg/KrDjIPfO+0YfaeEWa32VJvXJnx9X6DZqMZZHnFohX2GdAN7QwKPgDKOcYGdFIechYqIiY8Zd4affZUBm26HCniTmJ0RkwSqj6WtoKByl7SqtxC2uKOzs55rkrmtmra9u7SizMxmrcHLoJ7Hv8i3xhnPZcXbSXtqdNHc6avC1LCv79GGwem97TXQsifT6VKq4+7V2LHv8xH2vFnrs1/UCV47dOnjVeXaIVgyZwHz5s/hYOaxhxHUJL/wcnKiOY8NpHeCFLJqCo0eOyjNM2btRnEmU+ls5UsmN4713BliNFliM50yTQmjqFtpNYNCjPnz7DDe2pQKY0V1Eh4nyKMSZUQ9x2wuR4EyxWLWOBZrNJLuymr0TbaGUqyYvXOWpZsv0Y8K3Tpi/pnn1TdqfUrYPhLuVDF5XewIsFE36saGXVyHsbW1ZauDJmynUnR2a82XBm0G3P0DBtr+1AeklvdEDtWTXSzoepuuZKNjXe3ZJrW7391+1Jm7JxjntR2uJAmaDHHo9dLxdv4czQ0gyVvDdsTtP/QU9aibnjrsajc+8IBqB1z2bHyy2f3sPU7tOX00f/fnjv6vvVg/+/n1OA2QmIT3sGgXRefgOuNtuCDRY414G2sYaggynhZ6FvrUUI1oTXZaUUKRQ296AwHl5Im3PmnRiibqJhZxGLz11EIHktXuWfZuHF+BuDB9pno4y/uQSjjQbSyOJXJKKYmXhGXSZkf3E5ZOJD+B2544tS3vhakZoBqeN6+oU1ml8jZPlliWH+V2Wba6bIoUJGNsUfbeCgOeCMcuaXVp0jzuanmnpWNOeULtpJnJd5QiiCd7gFCl+AkGrIZ46kienjmGBi2WOchFYIGahuorkph1HCKR2bbzp5GJUKeqnkmWIWx+WMGZJqJnC2KhrqnyheWiiXe+L63Zu+tupeqez/nbDrqL1aCmiyaq4l64ZPInvZpERiCGu1ZDKqqbSRNkmHiN5O6uqRyob22VGN8qrds+OGqhiNJOA5pLxIzsqZuamuCiC+qi4a5Ipb9ekewd8WPLAYYJb5r4rcWktppvXmxS4H5E6M4Lp3OiyovsHZRnK+zlJ8arqeSgiuuivXyuOWxGbsh3Ivl3xyxy9zqiWq+1Ls8rU8Vyq0XeIKy2y386qaqcQAc8zCzHVuq7NeG9fos8ZK//zqlR+KKum0K0h9KNUzmw1gcB6bt67Tjjoaa6Rrxzdms1jbPeiPiaUdcs5ng1xskkbLLTaT4L4d8OE2680yw0RTqvDfDTued4dW/z6O8XZOP8z3v/MFmzng4f7aN9Za04046qJvXnrbfMF7t8gJb+2vWDsfqq3BhrG+LOO2F014sZz37CrZOPouOOnCt34076P/Dn3yzw8P9q2wX60wtb5TT+vnC3dtOvO9P1r25db7TW+ioHc6Pvf1FQ6z43NTd3v7X0ueZuOIog/0p7XbvSTLxYt/VQsBzsI2PlYdS3uQY1+C/qc44r1ugN+DW80o2LT60Wx7paNa0KZ3P9q5C2n2M5zqJMjBBEbQZwocXAPl17wRFk+ExyMhCE0oHIgJsHoHw50PlWcvU2FKg1N7H/CwVTcjBvFafTkihbpnsgJ+DINCZJoLR6bEA/8KDIFLjB/YGBg3rtWOgZXboveKKDMHljGK6HLhdsyoxjj274hbHGMHe4jDMLaxhDdEIhlFB8c07q6K3xuaxVr2wz5msIbG86IhowXEBd4MkSp00x8HaTUlSpKOlHxeI/E3Q6/pD4yQhN4hKSfHx6ntjjBMIRK9d6rKHYsz+eviKcEXy07Osl8arCP8ZAm/tETucqd
JH8o0ebQ1WtEG2NsaAGdHwRa6oJmWLCWwCnhL11CzkNZMZiebuEPN7Y1um3ylvqSZRMBMkH7BS6exvCnIdsUFY+X0IDxZyMt5sg1ay4OS7Gp4F0y+8GlflGEigenErw3tjL9kIz/9aFDk4XH/oG6b6AeniM8tXRONBc3jJKOJTEYSk5DCNGbsQIRQT46UiQ7MZilpSauAgu+R+uzZME+IzREylKUafWkWYSpK8nnUnxwLJDf5iLY5qrSVG6Ro+aqJVH4FK5QdbSo9g0nSg3Yueiu16j6H2kdfZlKpFbXpOXVHUOlRT6ZJfScOdWhB/aX0fFzcJlwXF05dQrOq9nzrWac6TlVeMaLu3Cm7lJkbTsYTYUub6V61iFY7AjRXXDNshjgKzg9qdrB+XeZhvwlIh8o1tFDFzV3J+Vi8uo+pm+2nV1E7VnZWlrRHNe1fZXvTjMrTkeu8ZupOmlPgCHe4xC2ucY+L3OQqd7nMD22uc58L3ehKd7rUZUEBAAA7" height="300" />
</description>
</item>
<item>
<title>2023-06-12: CONECT Workshop on Learning</title>
<link>https://conect-int.github.io/talk/2023-06-12-conect-workshop-on-learning/</link>
<pubDate>Mon, 12 Jun 2023 09:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2023-06-12-conect-workshop-on-learning/</guid>
<description><h1 id="conect-workshop-active-learning-in-brains-and-machines">CONECT Workshop <em>Active learning in brains and machines</em></h1>
<p>We organized a one-day workshop on Monday, June 12th, at the Institut de Neurosciences de la Timone (INT), in the Gastaut room (9h-17h), campus Timone. The aim of CONECT one-day workshops was to gather computational neuroscientists and experimentalists around an open question in the field, with plenty of room for interaction and discussion.</p>
<p>When? <em>12th of June 2023</em></p>
<p>Where? <a href="https://goo.gl/maps/MLpmsN9cd2N1Uv1L7" target="_blank" rel="noopener"><em>Campus Timone (room Gastaut), Aix-Marseille Université</em>, 27 Boulevard Jean Moulin, 13005 Marseille</a></p>
<p>Organizers: Simon Nougaret, Emmanuel Daucé, Laurent Perrinet (mobile: 0619478120), Matthieu Gilson</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img alt="attendees" srcset="
/talk/2023-06-12-conect-workshop-on-learning/IMG_1617_hu55ebe531ef629f2f9bbb055848de5bda_1416425_905e1d9d1d818029cf8732850e05ba9b.webp 400w,
/talk/2023-06-12-conect-workshop-on-learning/IMG_1617_hu55ebe531ef629f2f9bbb055848de5bda_1416425_7d552a93f2d9086f7a07f05ed8a2a7af.webp 760w,
/talk/2023-06-12-conect-workshop-on-learning/IMG_1617_hu55ebe531ef629f2f9bbb055848de5bda_1416425_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://conect-int.github.io/talk/2023-06-12-conect-workshop-on-learning/IMG_1617_hu55ebe531ef629f2f9bbb055848de5bda_1416425_905e1d9d1d818029cf8732850e05ba9b.webp"
width="570"
height="760"
loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<h2 id="topics">Topics</h2>
<p>In biology, a major trait of neural systems is the capability to <em>learn</em>, that is, to adapt their behavior to the environment they interact with. Recent advances in machine learning and deep learning have, in parallel, contributed to formulating learning in terms of optimizing performance in task-specific domains.</p>
<p>While each field inspires the other, there is still a gap in our understanding of how learning in machines may compare or even relate to learning in biology. The goal of this workshop is to allow people from both computational and experimental sides to understand current research achievements and challenges about <em>active learning in brains and machines</em>.</p>
<h3 id="program">PROGRAM</h3>
<p>9:00 : Welcome &amp; Introduction</p>
<h4 id="session-1--encoding-of-neuronal-representations-chair-emmanuel-daucé">Session 1 : Encoding of neuronal representations (chair: Emmanuel Daucé)</h4>
<p>9:15 : <strong><strong>Alexandre Pitti</strong></strong> (ETIS, CY-U, Cergy Pontoise): &ldquo;Neuro-inspired mechanisms for sensorimotor and syntactic learning in language&rdquo;</p>
<p>9:55 : <strong>Laurie Mifsud &amp; Matthieu Gilson</strong> (INT, Marseille) &ldquo;Statistical learning in bio-inspired neuronal network&rdquo;</p>
<p>10:15 : <strong>Antoine Grimaldi</strong> (INT, Marseille) &ldquo;Learning in networks of spiking neurons with heterogeneous delays&rdquo;</p>
<p>10:35 : coffee break</p>
<h4 id="session-2--learning-action-selection-chair-matthieu-gilson">Session 2 : Learning action selection (chair: Matthieu Gilson)</h4>
<p>11:00 : <strong><strong>Jorge Ramirez Ruiz</strong></strong> (Univ Pompeu Fabra, Barcelona) &ldquo;Path occupancy maximization principle&rdquo;</p>
<p>11:40 : <strong>Nicolas Meirhaeghe</strong> (INT, Marseille) : &ldquo;Bayesian Computation through Cortical Latent Dynamics&rdquo;</p>
<p>12:00 : <strong>Emmanuel Daucé &amp; Hamza Oueld</strong> (INT, Marseille) : &ldquo;Principles of model-driven active sampling in the brain&rdquo;</p>
<p>12:30 : Meal</p>
<h4 id="session-3--neuronal-basis-of-vocal-representation-chair-laurent-perrinet">Session 3 : Neuronal basis of vocal representation (chair: Laurent Perrinet)</h4>
<p>14:00 : <strong><strong>Thomas Schatz</strong></strong> (LIS, Marseille): &ldquo;Perceptual development, unsupervised representation learning and auditory neuroscience&rdquo;</p>
<p>14:40 : <strong>Charly Lamothe</strong> (LIS/INT, Marseille) &amp; <strong>Etienne Thoret</strong> (PRISM/LIS/ILCB, Marseille): &ldquo;Decoding voice identity from brain activity&rdquo;</p>
<p>15:00 : coffee break</p>
<h4 id="session-4--distribution-and-integration-of-brain-functions-chair-simon-nougaret">Session 4 : Distribution and integration of brain functions (chair: Simon Nougaret)</h4>
<p>15:30 : <strong><strong>Jean-Rémi King</strong></strong> (Meta / CNRS): &ldquo;Language in the brain and algorithms&rdquo;</p>
<p>16:10 : <strong>Etienne Combrisson &amp; Andrea Brovelli</strong> (INT, Marseille) : &ldquo;Cortico-cortical interactions for goal-directed causal learning&rdquo;</p>
<p>16:30 : Round table</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img alt="attendees" srcset="
/talk/2023-06-12-conect-workshop-on-learning/IMG_7807_hua238fd24969325e44a2e5a20c16ddb0f_1676835_4213d5016f5c6256ba9380ce4d3e75b3.webp 400w,
/talk/2023-06-12-conect-workshop-on-learning/IMG_7807_hua238fd24969325e44a2e5a20c16ddb0f_1676835_c1ac7a60351ca73a881e363dbcf38d44.webp 760w,
/talk/2023-06-12-conect-workshop-on-learning/IMG_7807_hua238fd24969325e44a2e5a20c16ddb0f_1676835_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://conect-int.github.io/talk/2023-06-12-conect-workshop-on-learning/IMG_7807_hua238fd24969325e44a2e5a20c16ddb0f_1676835_4213d5016f5c6256ba9380ce4d3e75b3.webp"
width="760"
height="570"
loading="lazy" data-zoomable /></div>
</div></figure>
</p>
<h3 id="abstracts">Abstracts</h3>
<ul>
<li>Andrea Brovelli: &ldquo;Cortico-cortical interactions for goal-directed learning&rdquo;</li>
</ul>
<blockquote>
<p>During my presentation, I will provide a concise overview of two recent studies that explore the significance of cortico-cortical interactions in goal-directed learning and the processing of outcome-related learning computations, specifically prediction errors. In the first study, we examined the interaction between human prefrontal and insular regions during reward and punishment learning. Using intracranial EEG recordings to measure high-gamma activity (HGA) and leveraging advancements in information theory, we discovered a functional distinction in inter-areal interactions between reward and punishment learning. A reward subsystem with redundant interactions was observed between the orbitofrontal and ventromedial prefrontal cortices, where the ventromedial prefrontal cortex played a Granger-causality driving role. Additionally, we identified a punishment subsystem with redundant interactions between the insular and dorsolateral cortices, with the insula acting as the primary driver. Furthermore, we found that the encoding of both reward and punishment prediction errors was mediated by synergistic interactions between these two subsystems. In the second study, we investigated the spatio-temporal characteristics of cortico-cortical interactions that support learning-related variables, such as reward-related signals (Bayesian surprise). Our results revealed the involvement of a distributed network comprising the visual, lateral prefrontal, and orbitofrontal cortex. Preliminary findings also indicated the presence of higher-order synergistic interactions that emerge from the combined activation of these networks.</p>
</blockquote>
<ul>
<li>Emmanuel Daucé &amp; Hamza Oueld : &ldquo;Principles of model-driven active sampling in the brain&rdquo;</li>
</ul>
<blockquote>
<p>Understanding our environment requires not only passively observing sensory samples, but also acting to seek out useful relationships between our actions and their possible outcomes. Inspired by the concept of &ldquo;visual salience&rdquo;, we provide a way to interpret action selection as making an &ldquo;ideal experiment&rdquo;, in a behavioral task where participants estimate the causal influence of a player on the outcome of a volleyball game. We show that the balance between the accuracy and the diversity objectives can lead to specific action selection biases, reflected both in the model and in the data.</p>
</blockquote>
<ul>
<li>Antoine Grimaldi: <a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/" target="_blank" rel="noopener">&ldquo;Learning in networks of spiking neurons with heterogeneous delays&rdquo;</a></li>
</ul>
<blockquote>
<p>The response of a biological neuron depends on the precise timing of afferent spikes. This temporal aspect of the neural code is essential to understand information processing in neurobiology and applies particularly well to the output of neuromorphic hardware such as event-based cameras. However, most artificial neural models do not take advantage of this important temporal dimension of the neural code. Inspired by this neuroscientific observation, we develop a model for the efficient detection of temporal spiking motifs based on a layer of spiking neurons with heterogeneous synaptic delays. The model uses the property that the diversity of synaptic delays on the dendritic tree allows for the synchronization of specific arrangements of synaptic inputs as they reach the basal dendritic tree. We show that this can be formalized as a time-invariant logistic regression that can be trained on labeled data. We demonstrate its application to synthetic naturalistic videos transformed into event streams similar to the output of the retina or to event-based cameras and for which we will characterize the accuracy of the model in detecting visual motion. In particular, we quantify how the accuracy can vary as a function of the overall computational load showing it is still efficient at very low workloads. This end-to-end, event-driven computational building block could improve the performance of future spiking neural network (SNN) algorithms and in particular their implementation in neuromorphic chips.</p>
</blockquote>
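The core mechanism described in this abstract can be sketched with a toy example (hypothetical code, not the model presented in the talk): each synapse applies its own delay to its input spike train, so a spatio-temporal spike motif whose timing mirrors the delay pattern arrives in coincidence at the neuron, which a simple threshold then detects:

```python
# Toy sketch (hypothetical, not the talk's model): heterogeneous synaptic
# delays re-align a spatio-temporal spike motif so that a coincidence
# detector fires when the motif occurs.
import numpy as np

rng = np.random.default_rng(1)
n_afferents, T = 5, 100
delays = rng.integers(0, 10, n_afferents)   # one (assumed) delay per synapse

# Embed the motif: afferent i spikes at t0 - delays[i], so that after each
# spike is delayed by delays[i], all inputs coincide at t0.
t0 = 50
spikes = np.zeros((n_afferents, T), dtype=int)
for i, d in enumerate(delays):
    spikes[i, t0 - d] = 1

# Apply the synaptic delays by shifting each spike train in time
delayed = np.zeros_like(spikes)
for i, d in enumerate(delays):
    delayed[i, d:] = spikes[i, : T - d]

# Coincidence detection: fire when all delayed inputs arrive together
drive = delayed.sum(axis=0)
fire_times = np.flatnonzero(drive >= n_afferents)
print(fire_times)  # the motif is detected at t0 = 50
```

In the talk's model this detection stage is formalized as a time-invariant logistic regression whose weights and delays are learned from labeled data; here the delays are simply drawn at random for illustration.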
<ul>
<li>Charly Lamothe &amp; Etienne Thoret: &ldquo;Decoding voice identity from brain activity&rdquo;</li>
</ul>
<blockquote>
<p>Voice information processing in the brain involves specialized areas called temporal voice areas (TVAs), which respond more strongly to vocalizations from the same species. However, how these areas represent voice information, specifically speaker identity, is not well understood. To investigate this, we used a deep neural network (DNN) to create a compact representation of voice stimuli called the voice latent space (VLS). We then examined the relationship between the VLS and brain activity using various analyses. We discovered that the VLS correlates with cerebral activity measured by fMRI when exposed to thousands of voice stimuli from numerous speakers. The VLS also better captures the representation of speaker identity in the TVAs compared to the primary auditory cortex (A1). Additionally, the VLS enables reconstructions of voice stimuli in the TVAs that maintain important aspects of speaker identity, as confirmed by both machine classifiers and human listeners. These findings suggest that the DNN-derived VLS provides high-level representations of voice identity information in the TVAs.</p>
</blockquote>
<ul>
<li>Jean-Rémi King: &ldquo;Language in the brain and algorithms.&rdquo;</li>
</ul>
<blockquote>
<p>Deep learning has recently made remarkable progress in natural language processing. Yet, the resulting algorithms fall short of the efficiency of the human brain. To bridge this gap, we here explore the similarities and differences between these two systems using large-scale datasets of magneto/electro-encephalography (M/EEG), functional Magnetic Resonance Imaging (fMRI), and intracranial recordings. After investigating where and when deep language algorithms map onto the brain, we show that enhancing these algorithms with long-range forecasts makes them more similar to the brain. Our results further reveal that, unlike current deep language models, the human brain is tuned to generate a hierarchy of long-range predictions, whereby the fronto-parietal cortices forecast more abstract and more distant representations than the temporal cortices. Overall, our studies show how the interface between AI and neuroscience clarifies the computational bases of natural language processing.</p>
</blockquote>
<ul>
<li>Nicolas Meirhaeghe: &ldquo;Neural correlate of prior expectations in timing behavior&rdquo;</li>
</ul>
<blockquote>
<p>A central function of the brain is the capacity to anticipate the timing of future events based on past experience. Picture, for instance, a naive baseball player attempting to intercept an incoming ball. After a few failed trials, the player becomes increasingly close to striking the ball, and ultimately hits a home run. How does information about the past few throws help guide the timing of the player’s action? In this talk, I will present behavioral and electrophysiological data from non-human primates aimed at understanding how temporal expectations are represented in the brain. Specifically, I will address the following two questions: (1) how prior knowledge about the timing of a future event is encoded at the level of single neurons, and induces systematic biases in behavior; (2) what type of neural and behavioral changes occur when temporal expectations change in a dynamic environment.</p>
</blockquote>
<ul>
<li>Laurie Mifsud &amp; Matthieu Gilson: &ldquo;Statistical learning in bio-inspired neuronal network&rdquo;</li>
</ul>
<blockquote>
<p>In biological neuronal networks, information representation and processing, like learning by synaptic plasticity rules, are related not only to first-order statistics (i.e. mean firing rate) but also to second- and higher-order statistics of spike trains. This places the focus on the temporal structure of distributed spiking activity, at several timescales. In parallel, recent models in machine learning like deep temporal convolutional networks have switched from inputs like static images to time series. In both cases, the goal is to extract spatio-temporal patterns of activity, and it can be framed in the context of statistical learning. We will start from experimental evidence about spiking activity during a cognitive task, then present recent work combining covariance-based learning and reservoir computing to classify time series. The results highlight the important role of the recurrent connectivity in transforming information representations in biologically inspired architectures. Finally, we will see how to use this supervised learning framework to tune a recurrently connected population to experimental spiking data.</p>
</blockquote>
<ul>
<li>Alex Pitti : &ldquo;Neuro-inspired mechanisms for sensorimotor and syntactic learning in language&rdquo;</li>
</ul>
<blockquote>
<p>I propose that two neural mechanisms are at work in language acquisition: on the one hand, predictive coding, and on the other, serial order coding. Predictive coding helps connect causes to effects in sensorimotor coordination during voice learning, while serial order codes allow pattern recognition in sentences to extract syntactic rules and hierarchy. The coupling between two neural architectures based on these two mechanisms, respectively the cortico-basal system and the fronto-striatal system, can be used for the acquisition and categorization of sound primitives (syllables) and sequences (words). As a surprising extension of this idea, we have found that serial codes also produce efficient coding and can reach Shannon&rsquo;s limit in terms of information capacity. Language is a compressive representation of information.</p>
</blockquote>
<ul>
<li>Jorge Ramirez Ruiz: &ldquo;A maximum occupancy principle for brains and behavior&rdquo;</li>
</ul>
<blockquote>
<p>The usual approach to analyzing naturalistic behavior is to define its function as some form of reward or utility maximization. However, inferring the reward function of natural agents is problematic due to the unobservability of internal states, and designing one for artificial agents can produce unintended behavior. Here, we abandon the idea of reward maximization and propose a principle of behavior based on the intrinsic motivation to maximize the occupancy of future action and state paths. We reconceptualize ‘reward’ as a means to occupy paths, rather than as the goal itself. We show that goal-directed behavior emerges from this principle by applying it to various discrete and continuous state tasks. In particular, applying this principle to a network of recurrently connected neurons produces highly variable activity while avoiding the saturation of the units. This work provides a proof of concept that goal-directedness is possible in the complete absence of external reward maximization.</p>
</blockquote>
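<p>A minimal caricature of the maximum occupancy principle described above (my sketch, not the authors&rsquo; implementation; the corridor world, the discount factor and the value-iteration scheme are assumptions made here): with zero extrinsic reward, maximizing the entropy of future action paths yields a log-sum-exp value recursion, and states with no future paths (an absorbing cell) are avoided, so goal-directed-looking behavior emerges without any reward.</p>

```python
import numpy as np

N, GAMMA = 6, 0.9
ACTIONS = (-1, 0, 1)  # left, stay, right

def step(s, a):
    """Deterministic 1D corridor; cell 0 is absorbing (no future paths)."""
    if s == 0:
        return 0
    return min(max(s + a, 0), N - 1)

# Value iteration for the path-entropy objective:
# V(s) = max_pi E[ -log pi(a|s) + gamma * V(s') ]
#      = log sum_a exp(gamma * V(step(s, a))), with V(absorbing) = 0.
V = np.zeros(N)
for _ in range(500):
    V_new = np.zeros(N)
    for s in range(1, N):
        V_new[s] = np.log(sum(np.exp(GAMMA * V[step(s, a)]) for a in ACTIONS))
    V = V_new

def policy(s):
    """Optimal stochastic policy: softmax over discounted successor values."""
    p = np.array([np.exp(GAMMA * V[step(s, a)]) for a in ACTIONS])
    return p / p.sum()
```

<p>The value of a state counts (in a discounted sense) the future paths reachable from it, so it grows with distance from the absorbing cell, and the agent next to it preferentially moves away, with no reward anywhere in the problem.</p>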
<ul>
<li>Thomas Schatz: &ldquo;Perceptual development, unsupervised representation learning and auditory neuroscience&rdquo;</li>
</ul>
<blockquote>
<p>I will draw from ongoing research projects at the interface between developmental psychology, machine learning, and computational neuroscience, to illustrate how, in my view, perspectives from each of these fields may contribute to the others. More specifically, I will discuss how considerations from developmental psychology and computational neuroscience can inform the design of novel algorithms for the unsupervised learning of speech representations and how the study of these algorithms may, in turn, lead to a deeper understanding of dynamic signal processing in the human brain and of perceptual development in infancy.</p>
</blockquote>
</description>
</item>
<item>
<title>2023-03-28: CONECT thematic day on Spiking Neural Networks</title>
<link>https://conect-int.github.io/talk/2023-03-28-conect-thematic-day-on-spiking-neural-networks/</link>
<pubDate>Tue, 28 Mar 2023 10:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2023-03-28-conect-thematic-day-on-spiking-neural-networks/</guid>
<description><p>This meet-up was focused on <strong>discussing recent developments in Spiking Neural Networks</strong>, with plenty of time for discussion.</p>
<ul>
<li>We met at INT, Laurent Vinay meeting room.</li>
</ul>
<h2 id="program">Program</h2>
<ul>
<li>
<p>10:00</p>
<ul>
<li>Laurent Perrinet</li>
<li>Title: <strong>A short intro on <a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener">Precise Spiking Motifs in Neurobiological and Neuromorphic Data</a></strong></li>
<li>Slides: <a href="https://conect-int.github.io/slides/2023-03-28-conect-seminar-day-on-snns/" target="_blank" rel="noopener">https://conect-int.github.io/slides/2023-03-28-conect-seminar-day-on-snns/</a></li>
</ul>
</li>
<li>
<p>10:30</p>
<ul>
<li><a href="https://homepages.cwi.nl/~sbohte/" target="_blank" rel="noopener">Sander Bohte</a></li>
<li>Title: <strong>Scaling Up Spiking Neural Networks with Online Learning in Gated Spiking Neurons</strong></li>
</ul>
</li>
<li>
<p>11:30</p>
<ul>
<li>Antoine Grimaldi, PhD student (INT)</li>
<li>Title: <strong><a href="https://laurentperrinet.github.io/publication/grimaldi-23-bc/grimaldi-23-bc.pdf" target="_blank" rel="noopener">Learning heterogeneous delays in a layer of spiking neurons for fast motion detection</a></strong></li>
<li>The response of a biological neuron depends on the precise timing of afferent spikes. This temporal aspect of the neuronal code is essential to understanding information processing in neurobiology and applies particularly well to the output of neuromorphic hardware such as event-based cameras. However, most artificial neuronal models do not take advantage of this fine temporal dimension. Inspired by this neuroscientific observation, we develop a model for the efficient detection of temporal spiking motifs based on a layer of spiking neurons with heterogeneous delays, which we apply to the computer vision task of motion detection. Indeed, the variety of synaptic delays on the dendritic tree allows synaptic inputs to be synchronized as they reach the basal dendrites. We show that this can be formalized as a time-invariant logistic regression which can be trained using labeled data. We apply this model to solve the specific computer vision problem of motion detection, and demonstrate its application to synthetic naturalistic videos transformed into event streams similar to the output of event-based cameras. In particular, we quantify how the accuracy of the model varies with the total computational load. This end-to-end event-driven computational brick could help improve the performance of future spiking neural network algorithms and their prospective use in neuromorphic chips.</li>
</ul>
</li>
<li>
<p>12:00 Lunch time (at INT R+4, will be provided only for people registered below)</p>
</li>
<li>
<p>14:00</p>
<ul>
<li>Pr. Benoît Miramond (LEAT, Université Côte d&rsquo;Azur)</li>
<li>Title: <strong>Estimating Energy Efficiency of Spiking Neural Networks on neuromorphic hardware</strong></li>
<li>Spiking Neural Networks are a type of neural network in which neurons communicate using only spikes. They are often presented as a low-power alternative to classical neural networks, but few works have proven these claims to be true. In this work, we present a metric to estimate the energy consumption of SNNs independently of any specific hardware. We then apply this metric to SNNs processing three different data types (static, dynamic and event-based) representative of real-world applications. As a result, all of our SNNs are 6 to 8 times more energy-efficient than their FNN counterparts.</li>
</ul>
</li>
<li>
<p>15:00 break</p>
</li>
<li>
<p>15:30</p>
<ul>
<li>Dr. Andrea Castagnetti (LEAT, Université Côte d&rsquo;Azur)</li>
<li>Title: <a href="https://www.frontiersin.org/articles/10.3389/fnins.2023.1154241/full" target="_blank" rel="noopener"><strong>Trainable quantization for Speedy Spiking Neural Networks</strong></a></li>
<li>Spiking neural networks are considered the third generation of Artificial Neural Networks. SNNs perform computation using neurons and synapses that communicate using binary and asynchronous signals known as spikes. They have attracted significant research interest in recent years, since their computing paradigm theoretically allows sparse and low-power operations. This hypothetical gain, claimed since the beginning of neuromorphic research, has however been limited by three main factors: the absence of an efficient learning rule competing with that of classical deep learning, the lack of a mature learning framework, and a significant data-processing latency that ultimately generates an energy overhead. While the first two limitations have recently been addressed in the literature, the major problem of latency is not yet solved. Indeed, information is not exchanged instantaneously between spiking neurons but gradually builds up over time as spikes are generated and propagated through the network. This presentation focuses on quantization error, one of the main consequences of the SNN discrete representation of information. We propose an in-depth characterization of SNN quantization noise. We then propose an end-to-end direct learning approach based on a new trainable spiking neural model.</li>
</ul>
</li>
<li>
<p>16:00</p>
<ul>
<li>Yann Cherdo, PhD student (LEAT, Université Côte d&rsquo;Azur - Renault)</li>
<li>Title: <strong>HTM and SNN for bio-inspired time series forecasting</strong></li>
<li>In recent years, Spiking Neural Networks have gained much attention from the research community. They can now be trained using powerful gradient descent methods and have drifted from the neuroscience community to the Machine Learning community. An abundant literature shows that they can perform well on classical Artificial Intelligence tasks such as image or signal classification while consuming less energy than state-of-the-art models like Convolutional Neural Networks. Yet, there is very little work about their performance on unsupervised anomaly detection and time-series prediction. Indeed, the processing of such temporal data requires different encoding and decoding mechanisms and raises questions about their capacity to model a dynamical signal with long-term temporal dependencies. In this presentation, we propose a comparison between Sparse Recurrent Spiking Neural Networks and Hierarchical Temporal Memories (HTM). We show that both models perform well on temporal tasks, opening the door to further studies of embedded applications for Spiking Neural Networks.</li>
</ul>
</li>
<li>
<p>17:00 outro</p>
</li>
</ul>
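<p>The heterogeneous-delays detector from Antoine Grimaldi&rsquo;s talk can be caricatured in a few lines. This is an illustrative sketch under assumed values, not the published model: the delays, uniform weights and threshold are made up here, whereas the actual model learns the weights by logistic regression on labeled data. Each input spike train is shifted by its synaptic delay; spikes matching the delay pattern then arrive in synchrony and produce a sharp peak in the summed drive, read out through a sigmoid.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
T, N_IN = 100, 8
delays = rng.integers(0, 10, N_IN)   # hypothetical per-synapse delays (time bins)
weights = np.ones(N_IN)              # uniform weights for this sketch

# background activity plus one embedded motif at t0 = 50:
# neuron i fires at t0 - delays[i], so the delayed inputs coincide at t0
t0 = 50
spikes = (rng.random((N_IN, T)) < 0.05).astype(float)
for i, d in enumerate(delays):
    spikes[i, t0 - d] = 1.0

def delayed_drive(spikes, delays, weights):
    """Shift each train by its synaptic delay, then weight-sum: inputs that
    match the delay pattern arrive in synchrony at the soma."""
    acc = np.zeros(spikes.shape[1])
    for i, d in enumerate(delays):
        acc += weights[i] * np.roll(spikes[i], d)
    return acc

drive = delayed_drive(spikes, delays, weights)
# logistic readout: probability of motif presence at each time bin
# (threshold of 6 coincident spikes, an arbitrary choice here)
p_motif = 1 / (1 + np.exp(-(drive - 6.0)))
```

<p>All eight delayed spikes of the motif coincide at t0, so the drive peaks there well above the background coincidence level, and the sigmoid readout flags the motif; in the actual model this readout is trained, making it a time-invariant logistic regression over delayed inputs.</p>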
</description>
</item>
<item>
<title>CONECT thematic day on Spiking Neural Networks</title>
<link>https://conect-int.github.io/slides/2023-03-28-conect-seminar-day-on-snns/</link>
<pubDate>Tue, 28 Mar 2023 10:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/slides/2023-03-28-conect-seminar-day-on-snns/</guid>
<description>
<section data-noprocess data-shortcode-slide
data-background-image="/media/open-book.jpg"
>
<h1 id="spiking-neural-networks">Spiking Neural Networks</h1>
<h2 id="conect-thematic-day">CONECT thematic day</h2>
<aside class="notes">
<p><strong>1 MINUTE</strong></p>
<ul>
<li>Press <code>S</code> key to view</li>
<li>Hi, I am LP and, in the name of CONECT, we look forward to discussing SNNs</li>
<li>as part of the CONECT&hellip;</li>
<li>In this short presentation, we will present the challenges that we want to tackle and which we named&hellip;</li>
</ul>
</aside>
<hr>
<img data-src="https://conect-int.github.io/slides/conect/CONECT-logo.png" height="200" />
<p><a href="https://conect-int.github.io" target="_blank" rel="noopener">CONECT: Computational Neuroscience Center @ INT</a></p>
<aside class="notes">
<p><strong>2 MINUTE</strong></p>
<p>-so, what is CONECT?</p>
<ul>
<li>
<p>CONECT is the Computational Neuroscience Center @ INT, bringing together a core of theoreticians</p>
</li>
<li>
<p>aims at making bridges in neuroscience</p>
</li>
<li>
<p>and across the community</p>
</li>
</ul>
</aside>
<hr>
<h2 id="challenge-visual-latencies">Challenge: Visual latencies</h2>
<img data-src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency-estimate.jpg" height="420" />
<p><a href="https://doi.org/10.1126/science.1058249" target="_blank" rel="noopener">Thorpe &amp; Fabre-Thorpe, 2001</a></p>
<aside class="notes">
<p><strong>1 MINUTE</strong></p>
<ul>
<li>
<p>In particular in our group, we are interested in dynamics of neural processing</p>
</li>
<li>
<p>The visual system is very efficient at generating a decision, from the retinal image through the different stages of the visual pathways; here, for a macaque monkey, a reaction of the finger muscles occurs in about 300 milliseconds.</p>
</li>
<li>
<p>the process of categorizing an object involves about 10 layers of processing</p>
</li>
</ul>
</aside>
<hr>
<h2 id="challenge-visual-latencies-1">Challenge: Visual latencies</h2>
<img data-src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/visual-latency.jpg" height="420" />
<p>Review on <a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener">Precise Spiking Motifs</a></p>
<aside class="notes">
<p><strong>1 MINUTE</strong></p>
<ul>
<li>
<p>the latencies are similar in the human brain, merely scaled with brain size</p>
</li>
<li>
<p>as a consequence, it is thought that this efficiency is achieved by spikes, that is, brief all-or-none events passed from one assembly of neurons to another within the very large network that forms the brain.</p>
</li>
</ul>
</aside>
<hr>
<h2 id="key-spiking-neural-networks">Key: Spiking Neural Networks</h2>
<img data-src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/replicating_MainenSejnowski1995.png" height="420" />
<p><a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_2_MainenSejnowski1995.ipynb" target="_blank" rel="noopener">Mainen Sejnowski, 1995</a></p>
<aside class="notes">
<p><strong>2 MINUTE</strong></p>
<ul>
<li>reproducibility</li>
</ul>
</aside>
<hr>
<h2 id="key-spiking-neural-networks-1">Key: Spiking Neural Networks</h2>
<img data-src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Diesmann_et_al_1999.png" height="420" />
<p><a href="https://github.com/SpikeAI/2022_polychronies-review/blob/main/src/Figure_3_Diesmann_et_al_1999.py" target="_blank" rel="noopener">Diesmann et al. 1999</a></p>
<aside class="notes">
<p><strong>2 MINUTE</strong></p>
<ul>
<li>This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents.</li>
</ul>
</aside>
<hr>
<h2 id="hypothesis-spiking-motifs">Hypothesis: Spiking motifs</h2>
<img data-src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/haimerl2019.jpg" height="420" />
<p>Review on <a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener">Precise Spiking Motifs</a></p>
<aside class="notes">
<p><strong>2 MINUTE</strong></p>
<ul>
<li>This hypothesis is reviewed with respect to our knowledge of the neurobiology, for instance in the hippocampus of rodents.</li>
</ul>
</aside>
<hr>
<h2 id="hypothesis-spiking-motifs-1">Hypothesis: Spiking motifs</h2>
<img data-src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/Ikegaya2004zse0150424620001.jpeg" height="420" />
<aside class="notes">
<p><strong>2 MINUTE</strong></p>
<ul>
<li>numerous and extensive work on mechanisms that may allow the neural system to learn to actually use these precise spiking motifs by tuning the delays between pairs of neurons.</li>
</ul>
</aside>
<hr>
<h2 id="hypothesis-spiking-motifs-2">Hypothesis: Spiking motifs</h2>
<img data-src="https://github.com/SpikeAI/2022_polychronies-review/raw/main/figures/izhikevich.png" height="420" />
<p>Review on <a href="https://laurentperrinet.github.io/publication/grimaldi-22-polychronies/" target="_blank" rel="noopener">Precise Spiking Motifs</a></p>
<aside class="notes">
<p><strong>2 MINUTE</strong></p>
<ul>
<li>
<p>Izhikevich polychronization</p>
</li>
<li>
<p>yet the domain is vast, and there is a lot to do in SNNs</p>
</li>
</ul>
</aside>
<hr>
<h2 id="todays-program">Today&rsquo;s program&hellip;</h2>
<table>
<tr>
<th><img data-src="https://www.cwi.nl/intranet/faces/1152.jpg" height="175" /></th>
<th><img data-src="https://laurentperrinet.github.io/author/antoine-grimaldi/avatar_hu85406bb2d5f7db2dce1cab01b4e48063_27520_270x270_fill_q75_lanczos_center.jpg" height="175" /></th>
<th><img data-src="https://3ia.univ-cotedazur.eu/medias/photo/benoit-miramond_1621434732805-png?ID_FICHE=1087703" height="175" /></th>
<th><img data-src="https://phd-seminars-sam.inria.fr/files/2019/04/photo_Andrea_Castagnetti-235x300.jpg" height="175" /></th>
<th><img data-src="https://media.licdn.com/dms/image/C4D03AQG1wCHtwVhGYg/profile-displayphoto-shrink_400_400/0/1582485965416?e=1685577600&amp;v=beta&amp;t=oUiVlWlAQLG9rnz0nu0r-TdZ2LftDopThqB51nx4vQc" height="175" /></th>
</tr>
<tr>
<td>Sander<BR>Bohte</td>
<td>Antoine<BR>Grimaldi</td>
<td>Benoit<BR>Miramond</td>
<td>Andrea<BR>Castagnetti</td>
<td>Yann<BR>Cherdo</td>
</tr>
</table>
<p><a href="https://conect-int.github.io/talk/2023-03-28-conect-thematic-day-on-spiking-neural-networks/" target="_blank" rel="noopener">Program &amp; more</a></p>
<aside class="notes">
<p><strong>2 MINUTE</strong></p>
</aside>
</description>
</item>
<item>
<title>2023-03-03 : INT seminar by Andrea Alamia</title>
<link>https://conect-int.github.io/talk/2023-03-03-int-seminar-by-andrea-alamia/</link>
<pubDate>Fri, 03 Mar 2023 14:30:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2023-03-03-int-seminar-by-andrea-alamia/</guid>
<description><p>During this INT seminar, <a href="https://artipago.github.io" target="_blank" rel="noopener">Andrea Alamia</a> will present his recent work on &ldquo;<strong>Interpreting oscillations as travelling waves: the role of alpha-band oscillations in cognition</strong>&rdquo;:</p>
<blockquote>
<p>In this talk, I will present three studies that characterize oscillatory traveling waves in the framework of Predictive Coding. In the first study, I’ll introduce a simple model of the visual cortex based on predictive coding mechanisms, in which physiological communication delays between levels generate alpha-band rhythms. Interestingly, these oscillations propagate as traveling waves across levels, both forward (during visual stimulation) and backward (during rest). Remarkably, experimental EEG data matched the predictions of our model. The second study refines the results of the first one, demonstrating that the direction of propagation of alpha-band waves is task dependent. Specifically, forward waves (from occipital to frontal regions) prevail during visual processing, whereas backward waves (from frontal to occipital areas) occur predominantly without visual stimulation. The last study explores the effect of a powerful psychedelic drug, N,N-Dimethyltryptamine (DMT), on alpha-band oscillations, considering a model proposed in the literature based on Predictive Coding. Despite participants being in the eyes-closed condition, DMT elicits a spatio-temporal pattern of cortical activation (i.e. traveling waves) similar to that produced by visual stimulation, in line with the predictions of the proposed model. Lastly, I’ll show some preliminary results about the role of oscillatory traveling waves in schizophrenic patients, interpreting the results in the light of Predictive Coding.</p>
</blockquote>
<div class="alert alert-note">
<div>
Andrea Alamia is a CNRS researcher at the Brain and Cognition Research Center (CerCo) in Toulouse (France).
</div>
</div>
</description>
</item>
<item>
<title>2023-01-05 : CONECT seminar by Guillaume Dumas</title>
<link>https://conect-int.github.io/talk/2023-01-05-conect-seminar-by-guillaume-dumas/</link>
<pubDate>Thu, 05 Jan 2023 16:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2023-01-05-conect-seminar-by-guillaume-dumas/</guid>
<description><p>During this CONECT seminar, <a href="https://www.extrospection.eu" target="_blank" rel="noopener">Guillaume Dumas</a> will present his recent work on &ldquo;<strong>Multilevel Development of Cognitive Abilities in an Artificial Neural Network</strong>&rdquo;:</p>
<blockquote>
<p>Several neuronal mechanisms have been proposed to account for the formation of cognitive abilities through postnatal interactions with the physical and socio-cultural environment. Here, we introduce a three-level computational model of information processing and acquisition of cognitive abilities. We propose minimal architectural requirements to build these levels and show how the parameters affect their performance and relationships. The first, sensorimotor level handles local nonconscious processing, here during a visual classification task. The second, cognitive level globally integrates the information from multiple local processors via long-ranged connections and synthesizes it in a global, but still nonconscious manner. The third and cognitively highest level handles the information globally and consciously. It is based on the Global Neuronal Workspace (GNW) theory and is referred to as the conscious level. We use trace and delay conditioning tasks to challenge the second and third levels, respectively. Results first highlight the necessity of epigenesis through selection and stabilization of synapses at both local and global scales to allow the network to solve the first two tasks. At the global scale, dopamine appears necessary to properly provide credit assignment despite the temporal delay between perception and reward. At the third level, the presence of interneurons becomes necessary to maintain a self-sustained representation within the GNW in the absence of sensory input. Finally, while balanced spontaneous intrinsic activity facilitates epigenesis at both local and global scales, the balanced excitatory-inhibitory ratio increases.</p>
</blockquote>
<p>More info: <a href="https://www.pnas.org/doi/10.1073/pnas.2201304119" target="_blank" rel="noopener">https://www.pnas.org/doi/10.1073/pnas.2201304119</a></p>
<p>Keywords: computational biology, dynamical systems, medical machine learning (ML), neuroscience, AI ethics</p>
<div class="alert alert-note">
<div>
Guillaume Dumas is an Associate Professor of Computational Psychiatry in the Faculty of Medicine at the Université de Montréal, and the Principal Investigator of the Precision Psychiatry and Social Physiology laboratory at the CHU Sainte-Justine Research Center. He holds the IVADO professorship for “AI in Mental Health”, and the FRQS J1 in “AI and Digital Health”.
</div>
</div>
</description>
</item>
<item>
<title>2022-11-28: CONECT at the INT brainhack: Automatic detection of spiking motifs in neurobiological data</title>
<link>https://conect-int.github.io/talk/2022-11-28-conect-at-the-int-brainhack-automatic-detection-of-spiking-motifs-in-neurobiological-data/</link>
<pubDate>Mon, 28 Nov 2022 09:00:00 +0000</pubDate>
<guid>https://conect-int.github.io/talk/2022-11-28-conect-at-the-int-brainhack-automatic-detection-of-spiking-motifs-in-neurobiological-data/</guid>
<description><blockquote>
<p>TL;DR This project aims to develop a method for the automated detection of repeating spiking motifs, possibly noisy, in ongoing activity. Results are available on the shared repo: <a href="https://github.com/SpikeAI/2022-11_brainhack_DetecSpikMotifs" target="_blank" rel="noopener">https://github.com/SpikeAI/2022-11_brainhack_DetecSpikMotifs</a></p>
</blockquote>
<ul>
<li><a href="https://mattermost.brainhack.org/brainhack/channels/bhg22-marseille-detecspikmotifs" target="_blank" rel="noopener">Mattermost channel</a></li>
</ul>
<h2 id="description">Description</h2>
<h3 id="leaders">Leaders</h3>
<ul>
<li><a href="https://matthieugilson.eu" target="_blank" rel="noopener">Matthieu Gilson</a> - <a href="https://github.com/MatthieuGilson" target="_blank" rel="noopener">https://github.com/MatthieuGilson</a></li>
<li><a href="https://laurentperrinet.github.io" target="_blank" rel="noopener">Laurent Perrinet</a> - <a href="https://github.com/LaurentPerrinet" target="_blank" rel="noopener">https://github.com/LaurentPerrinet</a></li>
</ul>
<h3 id="collaborators">Collaborators</h3>
<ul>
<li>Hugo Ladret</li>
<li>George Abitbol</li>
</ul>
<h3 id="brainhack-global-2022-event">Brainhack Global 2022 Event</h3>
<ul>
<li><a href="https://brainhack-marseille.github.io" target="_blank" rel="noopener">Brainhack Marseille</a></li>
<li>supported by the <a href="https://laurentperrinet.github.io/grant/polychronies/" target="_blank" rel="noopener">Polychronies</a> grant</li>