machine quotidienne

AUM is perfect iOS music hub, now with Ableton Link and MIDI updates

Speaking of tools to glue together your gear and serve as the heartbeat of your studio – AUM. This iOS super-tool can serve as an essential hub for combining apps and hardware in any combination – and now it’s even more savvy with Ableton Link and MIDI.

You’d be forgiven for thinking AUM was just some sort of fancy mixer for the iPad. But it’s more like a studio for combining software with software, software with hardware, and hardware with hardware. So it might be a way to combine stuff that’s on your iOS device, or a convenient tool for mobile recording, or a way to let your iPad sit in a studio of other gear and make them play together, or a combination of all those things.

It does this by letting you do whatever you like with inputs and outputs, iOS plug-ins (Audio Unit extensions), audio between apps (Audiobus and Inter-App Audio), and multichannel audio and MIDI interfaces. It’s a host, a virtual patch bay (for both MIDI and audio), and a recording/playback device. And it’s a tool to center other tools. There’s also Ableton Link and MIDI clock support.

It’s worth bringing up AUM right now, because a minor point update – 1.3 – brings some major new features that make it truly invaluable.

  • Ableton Link 3 support means you can sync transport start/stop.
  • You get “MIDI strips” for hosting useful MIDI-only Audio Unit extensions.
  • You can import channels between sessions, and duplicate channel strips.
  • And you get tons of new MIDI mappings: program changes, tap tempo, loading presets, and even loading whole sessions can now be done via MIDI. I can imagine that getting this used in some pretty major stage shows.

Jakob Haq has shown some useful ways of approaching the app, including MIDI mapping control:

Lots more tutorials and resources on the official site:

http://kymatica.com/apps/aum

The full feature list:

High quality audio up to 32-bit 96kHz
Clean and intuitive user interface with crisp vector graphics
Extremely compact and optimized code, very small app size
Unlimited* number of channels
Unlimited* number of effect slots
Inserts and sends are configurable pre/post-fader
Internal busses for mixing or effect sends
Supports multi-channel audio interfaces
Supports Audio Unit extensions, Inter-App Audio and Audiobus
Audiobus state saving
Highly accurate transport clock
Metronome with selectable output and optional pre-roll
Sends host sync to Audio Unit plugins and IAA apps
Sends MIDI clock to external hardware
Play in time with Ableton Link
FilePlayer with sync and looping, access to all AudioShare files
Records straight into AudioShare storage space
Record synchronized beat-perfect loops
Built-in nodes for stereo processing, filtering and dynamics
Latency compensation makes everything align at the outputs
Separate Inter-App Audio / Audiobus output ports
Built-in MIDI keyboard
Fully MIDI controllable
MIDI Matrix for routing MIDI anywhere

The post AUM is perfect iOS music hub, now with Ableton Link and MIDI updates appeared first on CDM Create Digital Music.

Pioneer Squid is a monster standalone sequencer for your gear

Forget for a second that Pioneer is the CDJ and DJM company. Their latest TORAIZ goes in a radical new direction – making what might be the biggest mainstream hardware sequencer since the MPC and Octatrack.

But a deep sequencer with MIDI and CV, for 599€ (awaiting US pricing details) – that sounds like a blockbuster.

The rise of gear for making sound has left a fairly significant hole in the market. You’ve got tons of drum machines, tons of synths, tons of grooveboxes, and then a whole black hole of semi-modular and fully-modular instruments.

But what about making, you know – a song? There aren’t so many choices for actually pulling together rhythms and melodies on all those toys. You’ve got a mishmash of internal sequencing features and devices capable of multiple tracks. But there are limited options beyond that – used Akai MPCs, the Elektron Octatrack, and Arturia BeatStep Pro being most common. The Arturia piece is cheap and cheery – and shows up astride an amazing number of fancy Eurorack rigs, prized for its simplicity. But having just dusted mine off, I find its sequencing really limited.

So here’s the surprise: the company that promises a really deep sequencer, one with elaborate rhythmic features that happily get you off the grid and bending time if you want, is … Pioneer.

The SQUID is certainly in a funny position. On one hand, it’s a natural for real gearheads and synth nerds. On the other, it’s a Pioneer product, so you can bet marketing and DJ press alike will try to say this is about “DJs getting into production” or … something. (No! DJs! Stop while you still can have a social life and, like, money in your bank account! You’ll become broke antisocial hermits like the rest of us!)

But – who cares who this is for? What it appears to do … is a hell of a lot. And while it might actually have too many features (that, I think, will be the main element of any test), what’s surprising is that it isn’t a me-too sequencer. Despite the pads and step structure, Pioneer have made an effort to let musicians get off the grid and bend and warp time – so maybe drum machines can have soul again.

First, the predictable bit – it is a pad-based step sequencer, yes:

16 multicolored LED rubber pads with velocity sensitivity
Step record patterns
Live / real-time recording
Scale mode
Per-step automation recording (at least it seems that way – “parameter locks” or p-locks as known to users of other hardware)
Interpolation – this lets you set a beginning, middle, and end on steps and let the machine transition between them, a bit like creating automated envelopes
Harmonizer with up to six chords assigned to buttons
Chord mode with 18 built-in chord sets (I’m curious how customizable this is, as I’d rather the machine not make harmonies for me)
Transpose phrases on the fly
Up to five MIDI CCs on external devices
Randomizer (which covers everything, even CCs)
Pattern Set – this is interesting; it lets you lock in a combination of patterns into an arrangement, a bit like you can do with scenes in Ableton Live
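
A rough sketch of how that Interpolation feature might behave – purely a guess at the mechanics, not Pioneer’s implementation: set a parameter value at the first, middle, and last step, and let the machine ramp between them.

```python
# Hypothetical sketch of step interpolation: given a parameter value at the
# first, middle, and last step, fill the rest in by linear ramping -
# effectively drawing an automation envelope across the pattern.

def interpolate_steps(start, middle, end, n_steps=16):
    """Return one parameter value per step, ramping start -> middle -> end."""
    half = n_steps // 2
    values = []
    for i in range(n_steps):
        if i <= half:
            t = i / half
            values.append(start + (middle - start) * t)
        else:
            t = (i - half) / (n_steps - 1 - half)
            values.append(middle + (end - middle) * t)
    return values

# e.g. sweep a (hypothetical) filter-cutoff CC up to 127 and back down to 64
cutoff_per_step = interpolate_steps(0, 127, 64)
```

Real hardware would presumably offer curved ramps too, but the three-anchor idea is the same.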

And you can run sequences in different directions (bounce, reverse, whatever), as expected.

Multiple loops. Trigger probability – yeah, Pioneer are ready to take on Elektron here.

Already appealing and powerful, but it’s the real-time manipulation features that go in a new direction.

Speed modulation: look out, locked-bpm techno, because the SQUID can modulate speeds via six waveform shapes (triangle, sawtooth – please tell me there’s a random/S&H mode, too)
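
Conceptually, that kind of speed modulation just scales each step’s duration with an LFO. Here is a minimal sketch under stated assumptions – generic triangle/sawtooth math and 32nd-note base steps, not Pioneer’s actual curves:

```python
# Generic LFO shapes returning values in [-1, 1] for a phase in [0, 1).
def lfo(shape, phase):
    if shape == "triangle":
        return 4 * abs(phase - 0.5) - 1
    if shape == "sawtooth":
        return 2 * phase - 1
    raise ValueError("unknown shape: " + shape)

def step_durations(bpm, n_steps, shape, depth=0.25):
    """Length of each step in seconds, with the tempo wobbled by the LFO."""
    base = 60.0 / bpm / 8  # a 32nd note at this bpm
    return [base * (1 + depth * lfo(shape, i / n_steps))
            for i in range(n_steps)]

durations = step_durations(120, 16, "triangle")
```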

Groove bend: yes, there’s Swing, but there’s also “Groove Bend” which lets you use a slider to change timing. (I really hope there’s a way to optionally impact pitch, too, CDJ-style.)

Instant double- and half-speed triggers, too.

You can also shift the Scale and Arpeggiator knobs in real time, meaning… yeah, you can go super free jazz with this if you want.

There’s even an automatic mode that saves your jams even when you don’t hit record. (Ableton Live recently introduced this feature, joining a number of DAWs that have had it over the years.)

And yeah, it works with USB, MIDI, 2 sets of CV/gate, clock and DIN sync. It’s ready for your hardware from the 80s until now.

There’s even software for managing sequence patterns, projects, and MIDI clips – so you can save your work librarian style for live performances, and finish off tracks on the computer with patterns you made on the hardware.

Specs: 64 steps, 8 notes per step, 64 patterns, 128 projects.

I mean – we are sure this is a Pioneer product, right? Did someone get into our brains and make what we want?

I have a lot of questions. Step resolution seems fixed at 32nd notes, without mention of tuplets or other rhythms. I don’t see a listing for ppq resolution (the timing resolution of the sequencer). Performance reliability is something to test. Pioneer talks polyrhythms but I have some questions there.

But – wow. Yes. Let’s test this. Pioneer have given us some strange and mostly expensive “producer” devices lately, but this is different. This looks like it has the first shot at being the Pioneer gear every producer wants to buy – not just the Pioneer gear you use when you show up at the club. I can’t wait to get my hands on this so we can share with you what it does and how it might (or might not) fit your needs.

Obligatory promo video. Uh… someone stole Native Instruments’ typography and sci-fi light effects. But no matter – Pioneer made this device before NI did. (Okay, I’m buying the next round of beers in Kreuzberg after that comment, sorry, but it had to be said.)

The competition? It’s boutique, for sure, but the Synthstrom Deluge is the major rival:

It’s more compact than the Pioneer. And this really comes down to whether you want a 4×4 grid with a lot of dedicated triggers, or a whole bunch of pads and the Synthstrom’s nested editing capabilities. What’s really, really nice about the Deluge is, it has an internal synth engine and even sample playback. And ironically, that makes the Deluge better suited than Pioneer’s offering to taking a live project into a DJ booth – because you don’t have to reserve an entire table full of gear just to make sounds. That said, I think making a product dedicated to sequencing does free up the designers to focus on that workflow.

There should be room for both in the market; the workflow is very different, even apart from Synthstrom’s internal sound engine.

I feel bad I haven’t given the Deluge more time on CDM, so – now, no more excuses, I’ll get both these units in for a proper test.

Squarp Pyramid is also at the top of the pile as far as dedicated sequencers go:

All product details:

https://www.pioneerdj.com/en-us/product/production/toraiz-squid/black/overview/

I’m a child of the 80s, but every time Pioneer writes that this is “the heartbeat of your studio,” I think of old Chevrolet “heartbeat of America” ads. Is that just me? Okay, it’s just me.

Conference Ground Rules

Prof Lauren Griffith – Keynote 1, 2019

It is exactly one month until day one of our 5th annual conference, this year titled ‘Martial Arts, Culture and Politics’ and held at Chapman University, Orange, California. Our keynote on day one will be Professor Lauren Griffith (pictured above), author of two important studies of capoeira and culture. Day two’s keynote will be Dr Benjamin Judkins (pictured below), prolific martial arts studies scholar, editor, and author of a hugely influential study of wing chun.

A draft conference schedule is available here, and a more formal, finalised version will be available very soon.

With the conference looming large on the horizon, we wanted to set out some ground rules for everyone attending, along with some instructions for presenters.

Note also that, as the conference approaches, the discount deals we secured with our recommended hotels are coming to an end, so if you have not yet booked accommodation, you really should.

Dr Ben Judkins – Keynote 2, 2019
Ground Rules – for Everyone
To make the conference a success, experience shows that we need to set a few ground rules, and stick to them. Our key rules and guidelines for everyone – presenters and audience alike – are the following:

Rule #1: Stay on Time

  • Please try to get into the right rooms at the right times. We have a tight schedule, and there will often be several sessions running in parallel. These need to start and stop on time.
  • To do your bit to keep things to time, please ensure that your own presentation does not overrun the agreed limit (which is 20 minutes, max., for standard presentations). Each panel has a chair, who will politely try to keep you to time – with the aid of bells and whistles, if need be.

Rule #2: Be Respectful

  • This applies to all things. Be respectful in keeping to time and thereby enabling other people’s time. Be respectful of academic and social protocols and normal polite conventions.
  • When you are presenting or asking a question, remember that your time and your voice are not more important than other people’s. Similarly, in the rooms, in the corridors, during the meals, in the pubs, in the streets, in the halls, and at all times, please be respectful of other people’s dignity, rights and expectations.
  • There must be no harassment or prejudice of any kind – sexual, racial, religious, class-based, nationalistic, gendered, or anything else.

Rule #3: Be Hospitable

  • Intellectual hospitality is vital and vitalising in any academic context. So you must be hospitable to other people’s ideas, approaches, opinions, and voices. Being open to new ideas and new approaches, and being ready to meet difference, diversity, eclecticism and even dissensus, should not take anyone by surprise here. We are, after all, working across the intersections of multiple academic disciplines and discourses, seeking to immerse ourselves in and advance our knowledge and understanding of myriad aspects of martial arts, even if only for these two days.

Guidelines for Presenters

  • Panels. Panels consist of 2-3 presentations, each of which can be no more than 20 minutes.
  • Chairs. Each panel has a chair. The chair is responsible for keeping the panel to time.
  • Timing. Presenters are expected to finish within 20 minutes. The chair will alert presenters when they have 5 minutes left, 1 minute left, and no time left. Presenters must stop when they have no time left. You should time your talk in advance and keep checking a countdown timer.
  • Discussion. After the two or three twenty-minute presentations, panel chairs should organise a discussion/Q&A session. Chairs should try to ensure that anyone who wants to ask a question has the opportunity, if possible. Sessions should finish at the designated time.
  • Computers. Each lecture and seminar room has a networked computer connected to a data projector. There are facilities for connecting USB memory sticks, discs, laptops and Macs.
  • Printing. We do not have automatic access to printers. Please print before you arrive.
  • Precautions. It is a good idea to save your presentation in more than one file format (e.g., PPT and PDF), and on more than one device (e.g., USB memory stick and disc), just in case of technical glitches.
  • Preparation. You should load and test your presentation in the presentation room before your session begins. All presentation rooms will be unlocked from early in the morning and will remain unlocked between presentations. Everyone should work to ensure there are no delays caused by trying to load a presentation during the panel itself.    

Hopefully these rules and guidelines are helpful to you. If you have any questions, feel free to email either or both of the conference organisers, Paul Bowman and/or Andrea Molle.
See you soon!

Founder of music tech forum has died; outpourings of support for Mike McGrath

One of the largest forums for music tech nerd-kind this week reports the loss of its founder: Muff Wiggler’s creator, Mike McGrath, has died. The Internet responds.

I want to first say, my heart goes out to all of you who have lost a friend, a family member, a personal connection, or even a far-off but meaningful Internet connection.

Muff Wiggler, the forum, has for more than a decade been the single most influential online community for people interested in modular synthesis, as well as a range of DIY topics – it’s a common go-to for how-to documentation on electronics, among other topics. It has also hosted widely trafficked official forums for a number of brands, including the likes of Expert Sleepers, Hexinverter, Metasonix, and Snazzy FX. It’s been the object of love, of hate – but always has played a central role in conversations about music making technology and the voltage and circuits pulsing underneath.

And it’s worth saying that the whole project really began with one person, Mike – known by many exclusively online, but host to a community of strangers who often grew close. Like a lot of the blogs and forums that support the music tech community, Muff Wiggler and its creator have even become synonymous. I know personally how demanding that can be.

It wouldn’t be an exaggeration to say that part of the explosive growth of Eurorack and modular synthesis is due to Mike’s creation of the forum – one that inspired rabid consumers at the same time as it collected knowledge of how to engineer the modules.

Photo above, at top by I Dream of Wires, who interviewed Mike in their work on the evolution of the modern modular synthesis fandom.

The Muff Wiggler platform grew into other projects – a store, live events (like a collaboration with TRASH AUDIO in Portland, Oregon), and others, which helped people meet the man behind the forum in person, some of them flying from literally the other side of the world to do so.

About that name – it comes from a handle Mike chose that combined the names of two popular Electro-Harmonix effect pedals, Big Muff and The Wiggler.

For their part, a message from Muff Wiggler’s team promises they’ll keep the site going in Mike’s absence. Kent writes in an admin post: “The moderator and admin staff are going to take the needed time to get things in order and ensure the smoothest of possible transitions. It’ll be rough for a bit.”

In the meantime, there is an outpouring of sadness and gratefulness from people who knew Mike personally and those who knew him in the virtual arena – from the community of people for whom he created a home where none had existed.

The main thread on Muff Wiggler

Synthtopia obituary

Modular giant Ken MacBeth writes: “Mike McGrath……….I hope that you find your peace now……..RIP.”

Mike himself wrote in 2017 about his passion for the project in a Facebook Group, saying it began from wanting to learn about modular synthesis, amidst options that were “intimidating” – to create instead a place where you could make friends. And he talked about the importance of music and his machines in his personal life – in good times and in dark times.

Matrixsynth has a heartfelt obituary which traces some history – even before the forum, including the first blog posts by Muff Wiggler (back when it was just Mike’s alias):

Mike created the de facto modular synth forum on the internet … and he did it in a way that put members first. He created a platform for makers and users of synths to come together and engage directly with each other.

And yeah, I think all of us who have run enterprises on the Internet for music feel this one in our gut. Again quoting the mighty Matrixsynth:

I just can’t believe he is gone. As the host of this site, I feel like I lost a fellow compatriot. Someone I had history with through the ups and downs. Running a site can be a challenge, and just knowing he was out there doing his thing helped. I am going to miss him and the lost experiences we would all have had with him around.

RIP Mike McGrath of Muff Wiggler

Finally, long-time collaborator Surachai writes, “Mike is the connective tissue that bound almost every modular user when information was scarce.”

He goes on to say:

I invited whoever was interested in welcoming the overlord of the synthesizer community to a BBQ at my place and we were met with one of the kindest and smartest people to grace our lives….

His contributions to and maintenance of information cannot be overstated. His reach and ability to connect people cannot be overstated.

Mike McGrath / Muffwiggler

You’ll also find some videos online.

http://muffwiggler.com/

https://www.muffwiggler.com/forum/index.php

Now ‘AI’ takes on writing death metal, country music hits, more

Machine learning is synthesizing death metal. It might make your death metal radio DJ nervous – but it could also mean music software works with timbre and time in new ways. That news – plus some comical abuse of neural networks for writing lyrics in genres like country – next.

Okay, first, whether this makes you urgently want to hear machine learning death metal or it drives you into a rage, either way you’ll want the death metal stream. And yes, it’s a totally live stream – you know, generative style. Tune in, bot out:

First, it’s important to say that the whole point of this is, you need data sets to train on. That is, machines aren’t composing music so much as creatively regurgitating existing samples based on fairly clever predictive mathematical models. In the case of the death metal example, this is SampleRNN – a recurrent neural network that uses sample material, repurposed from its originally intended application, working with speech. (Check the original project, though it’s been forked for the results here.)

This is a big, big point, actually – if this sounds a lot like existing music, it’s partly because it is actually sampling that content. The particular death metal example is nice in that the creators have published an academic article. But they’re open about saying they actually intend “overfitting” – that is, little bits of samples are actually playing back. Machines aren’t learning to generate this content from scratch; they’re actually piecing together those samples in interesting ways.

That’s relevant on two levels. One, because once you understand that’s what’s happening, you’ll recognize that machines aren’t magically replacing humans. (This works well for death metal partly because, to non-connoisseurs of the genre, the way angry guitar riffs and indecipherable shouting are plugged together already sounds quite random.)

But two, the fact that sample content is being re-stitched in time like this suggests a very different kind of future sampler. Instead of playing the same 3-second audio on repeat or loop, for instance, you might pour hours or days of singing bowls into your sampler and then adjust dials that recreate those sounds in more organic ways. It might make for new instruments and production software.
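
To make that concrete, here is a toy, decidedly non-neural stand-in for what SampleRNN does at vastly greater scale: learn which sample value tends to follow each short context in some training audio, then generate new audio one sample at a time from those statistics. With sparse data it “overfits” in exactly the article’s sense – whole training snippets replay verbatim.

```python
import random
from collections import defaultdict

def train(samples, order=3):
    """Map each short context to the sample values that followed it."""
    model = defaultdict(list)
    for i in range(order, len(samples)):
        model[tuple(samples[i - order:i])].append(samples[i])
    return model

def generate(model, seed, length, order=3):
    """Autoregressive loop: predict one sample at a time from the context."""
    out = list(seed)
    for _ in range(length):
        candidates = model.get(tuple(out[-order:]))
        if not candidates:          # unseen context: fall back to the seed
            candidates = list(seed)
        out.append(random.choice(candidates))
    return out

audio = [0, 3, 7, 3, 0, -3, -7, -3] * 4  # a tiny repeating "waveform"
model = train(audio)
generated = generate(model, audio[:3], 16)
```

A real SampleRNN replaces the lookup table with a recurrent network predicting a probability distribution over the next sample, which is what lets it interpolate between contexts instead of only replaying them.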

Here’s what the creators say:

Thus, we want the output to overfit short timescale patterns (timbres, instruments, singers, percussion) and underfit long timescale patterns (rhythms, riffs, sections, transitions, compositions) so that it sounds like a recording of the original musicians playing new musical compositions in their style.

Sure enough, you can go check their code:

https://github.com/ZVK/sampleRNN_ICLR2017

Or read the full article:

Generating Albums with SampleRNN to Imitate Metal, Rock, and Punk Bands

The reason I’m belaboring this is simple. Big corporations like Spotify might use this sort of research to develop, well, crappy mediocre channels of background music that make vaguely coherent workout soundtracks or faux Brian Eno or something that sounded like Erik Satie got caught in an opium den and re-composed his piano repertoire in a half daze. And that would, well, sort of suck.

Alternatively, though, you could make something like a sampler or DAW more human and less conventionally predictable. You know, instead of applying a sample slice to a pad and then having the same snippet repeat every eighth note. (Guilty as charged, your honor.)

It should also be understood that, perversely, this may all be raising the value of music rather than lowering it. Given the amount of recorded music currently available, and given that it can already often be licensed or played for mere cents, the machine learning re-generation of these same genres actually requires more machine computation and more human intervention – because of the amount of human work required to even select datasets and set parameters and choose results.

DADABOTS, for their part, have made an entire channel of this stuff. The funny thing is, even when they’re training on The Beatles, what you get sounds like … well, some of the sort of experimental sound you might expect on your low-power college radio station. You know, in a good way – weird, digital drones, of exactly the sort we enjoy. I think there’s a layperson impression that these processes will magically improve. That may misunderstand the nature of the mathematics involved – on the contrary, it may be that these sorts of predictive models always produce these sorts of aesthetic results. (The same team use Markov Chains to generate track names for their Bandcamp label. Markov Chains work as well as they did a century ago; they didn’t just start working better.)

I enjoy listening to The Beatles as though an alien civilization has had to digitally reconstruct their oeuvre from some fallout-shrouded, nuclear-singed remains of the number-one hits box set post apocalypse. (“Help! I need somebody! Help! The human race is dead!” You know, like that.)

Deep the Beatles! by DADABOTS

As it moves to black metal and death metal, their Bandcamp label progresses in surreal coherence:

Megaturing by DADABOTS

This album gets especially interesting, as you get weird rhythmic patterns in the samples. And there’s nothing saying this couldn’t in turn inspire new human efforts. (I once met Stewart Copeland, who talked about how surreal it was hearing human drummers learn to play the rhythms, unplugged, that he could only achieve with The Police using delay pedals.)

I’m really digging this one:

Megaturing by DADABOTS

So, digital sample RNN processes mostly generate angry and angular experimental sounds – in a good way. That’s certainly true now, and could be true in the future.

What’s up in other genres?

SONGULARITY is making a pop album. They’re focusing on lyrics (and a very funny faux generated Coachella poster). In this case, though, the work is constrained to text – far easier to produce convincingly than sound. Even a Markov Chain can give you interesting or amusing results; with machine learning applied character-by-character to text, what you get is a hilarious sort of futuristic Mad Libs. (It’s also clear humans are cherry-picking the best results, so these are really humans working with the algorithms much as you might use chance operations in music or poetry.)
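
For comparison, a character-level Markov chain – the simpler ancestor mentioned above – fits in a dozen lines. The training text here is invented for illustration; like the lyric bots, it can only re-stitch spans it has already seen.

```python
import random
from collections import defaultdict

def build_chain(text, order=4):
    """Map every 4-character window to the characters that followed it."""
    chain = defaultdict(list)
    for i in range(len(text) - order):
        chain[text[i:i + order]].append(text[i + order])
    return chain

def babble(chain, seed, length, order=4):
    out = seed
    for _ in range(length):
        nxt = chain.get(out[-order:])
        if not nxt:                 # dead end: stop generating
            break
        out += random.choice(nxt)
    return out

lyrics = ("you can't take my door / barbed whiskey good "
          "and whiskey straight / you can't take my heart ")
chain = build_chain(lyrics)
print(babble(chain, lyrics[:4], 80))
```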

Whether this says anything about the future of machines, though, the dadaist results are actually funny parody.

And that gives us results like You Can’t Take My Door:

Barbed whiskey good and whiskey straight.

These projects work because lyrics are already slightly surreal and nonsensical. Machines chart directly into the uncanny valley instead of away from it, creating the element of surprise and exaggerated un-realness that is fundamental to why we laugh at a lot of humor in the first place.

This also produced this Morrissey “Bored With This Desire To Get Ripped” – thanks to the ingenious idea of training the dataset not just with Morrissey lyrics, but also Amazon customer reviews of the P90X home workout DVD system. (Like I said – human genius wins, every time.)

Or there’s Dylan mixed with negative Yelp reviews from Manhattan:

And maybe in this limited sense, the machines are telling us something about how we learn. Part of the poetic flow is about drawing on all our wetware neural connections between everything we’ve heard before – as in the half-awake state of creative vibrations. That is, we follow our own predictive logic without doing the usual censoring that keeps our language rational. Thinking this way, it’s not that we would use machine learning to replace the lyricist. Rather, just as with chance operations in the past, we can use this surreal nonsense to free ourselves from the constraints that normal behavior requires.

We shouldn’t underestimate, though, the human intervention in using these lyrics. The neural nets are good at stringing together short bits of words, but the normal act of composition – deciding the larger-scale structure, choosing funnier bits over weaker ones, recognizing patterns – remains human.

Recurrent neural networks probably won’t be playing Coachella any time soon, but if you need a band name, they’re your go-to. More funny text mangling from the Botnik crew.

My guess is, once the hype dies down, these particular approaches will wind up joining the pantheon of drunken walks and Markov Chains and fractals and other pseudo-random or generative algorithmic techniques. I sincerely hope that we don’t wait for that to happen, but use the hype to seize the opportunity to better educate ourselves about the math underneath (or collaborate with mathematicians), and see these more hardware-intensive processes in the context of some of these older ideas.

If you want to know why there’s so much hype and popular interest, though, the human brain may itself hold the answer. We are all of us hard-wired to delight in patterns, which means arguably there’s nothing more human than being endlessly entertained by what these algorithms produce.

But you know, I’m a marathon runner in my sorry way.

Web series « Bienvenue chez… » | Pierre Thibault’s episode + the second episode now airing!

Did you miss the first episode of our web series « Bienvenue chez… Pierre Thibault », in which he welcomes us into his “igloo”? The episode is just below!

Produced in collaboration with JURA and Schlüter-Systems (not forgetting Urbania as media partner), the six episodes are being released, one per week, on Kollectif’s Facebook page and YouTube channel!

We also asked each architect to draw us a sketch of their house – based on the one above, can you guess who our host will be on Sunday, April 21 at 7:00 a.m.? If you have seen Pierre’s episode, you already know!

Note that you can also follow us on Instagram and Twitter to go behind the scenes of our shoots.

So, it’s a date?!

To [re]watch our first season, click here!

Team:

  • Marc-André Carignan (Kollectif) / Host and research
  • Émilie Delorme (CClab) / Direction and editing
  • Jessica Rivière-Gomez (CClab) / Production director
  • Martin Houle (Kollectif) / Executive producer


The post Série Web « Bienvenue chez… » | Capsule de Pierre Thibault + Diffusion de la deuxième capsule! appeared first on Kollectif.

Exhibition at Université Laval + web publication of the guide « Penser l’école de demain »

Announcement:

« The exhibition runs at the École d’architecture de l’Université Laval from April 16 to 28. It presents the models and work of the Lab-École. The exhibition will then travel to the Maison de l’architecture du Québec – MAQ from April 29 to June 16.

The Lab-École’s new publication, « Penser l’école de demain », can be consulted on site [editor’s note: see the link below for the web version]. This tool presents the results of the research, analysis and consultation carried out by the Lab-École, illustrating evidence-based data, best practices and architectural proposals intended to foster innovation. It will be a reference document for the upcoming architecture competitions.

The Lab-École is a catalyst for innovative initiatives in the physical environment, healthy and active lifestyles, and food at school. True to its “laboratory” spirit, it operates in an open, flexible way, focused on exploration and experimentation. The Lab-École carries out its work in close collaboration with each school team and its community. It covers three fields of activity: the physical environment, healthy and active lifestyles, and food. Each field is led by a founder surrounded by a committee of experts, all volunteers, tasked with informing, validating and proposing concepts to create the best possible environment for each setting. These experts come from varied backgrounds: people from the education sector (teaching, support and management staff, school boards, officials from the Ministère de l’Éducation); university researchers; partners from business and the social economy; parents; and, of course, students. »

To consult the web version of the publication « Penser l’école de demain »…

To visit the Lab-École website…

 

Invitation to the graduating students' exhibition « L'Annuel de Design »

Announcement:

« Opening reception on May 1 from 6 p.m. to 9 p.m.
Registration required; reserve your spot here: Billet
Centre de design de l'UQAM (1440, rue Sanguinet, Montréal)

Exhibition from May 2 to 8
Free admission from 12 p.m. to 6 p.m.

Organizer: UQAM's graduating design students

The unmissable Annuel de Design of UQAM's graduating students returns once again this year, from May 1 to 8.

The event

May 1, 2019 marks the opening reception and launch of the fifth edition of the Annuel de Design. This convivial evening showcases the students' talent and gives industry professionals a chance to meet the next generation. The flagship event of Montreal's design scene continues until May 8, so the public can also enjoy the exhibition throughout the week following the opening. More than an exhibition, the event has gained renown in recent years, notably for the networking opportunities it adds to the quality of the projects on display. The Annuel has thus become a must-see, drawing around 3,000 visitors to the opening and another 5,000 during the exhibition week. Work by more than 190 designers will be presented.

The programs represented

  • Bachelor's in Environmental Design
  • Bachelor's in Graphic Design
  • Bachelor's in Fashion Management and Design
  • Graduate diploma (DESS) in Modern Architecture and Heritage
  • Graduate diploma (DESS) in Transportation Equipment Design
  • Graduate diploma (DESS) in Event Design
  • Master's in Environmental Design »

To get a ticket for the opening reception…

To visit the Annuel de Design Facebook page…

To visit the website of the École de design de l'UQAM…

Ron Rayside, Guillaume Fafard and Alan DeSousa honoured for their architectural engagement

Announcement:

« April 11, 2019

On April 5, the Ordre honoured three people who have each contributed, in their own way, to the quality of the built environment in Quebec. The distinctions followed a call for nominations launched in October 2018.

Ron Rayside, winner of the prix Engagement social

Architect Ron Rayside received the first prix Engagement social (social engagement award) in the Ordre's history, presented in collaboration with Architecture sans Frontières Québec. For nearly 40 years, Mr. Rayside has worked toward a more equitable and healthier society, orienting his professional practice toward the well-being of Montrealers. He has served, and continues to serve, on many boards, committees and coalitions devoted to health, the fight against homelessness and the development of the Centre-Sud district, one of the most disadvantaged neighbourhoods in Canada, where he chose to establish his practice.

His practice, and that of his firm, centres on sustainable development from a social standpoint. He shows great sensitivity to the local, social and environmental impacts of his projects, many of which are intended for community organizations and social housing.

Guillaume Fafard, winner of the prix Relève en architecture

The prix Relève en architecture (emerging talent award), also presented for the first time, went to architect Guillaume Fafard. Mr. Fafard founded his own firm almost at the moment he became a member of the Ordre, in 2013. His practice focuses on small-scale buildings in urban Quebec City, inviting families back to urban neighbourhoods with affordable housing suited to today's lifestyles. Several of his buildings have been recognized in recent years for their quality of design and construction.

Beyond his design practice, Guillaume Fafard is deeply committed to the next generation. In addition to regularly hosting students on construction-site visits and taking part in various student activities, he offers a scholarship to graduating architecture students to help them launch their careers.

Finally, he is concretely involved in his community, sitting on his borough's urban planning advisory committee (CCU), and last summer he even helped offer an architectural bicycle tour, a fine way of bringing architecture to a wider public.

Alan DeSousa, winner of the prix Ambassadeur de la qualité en architecture

The first prix Ambassadeur de la qualité (quality ambassador award) went to Alan DeSousa, mayor of Saint-Laurent. Mr. DeSousa has shown boldness and conviction about the quality of architecture and its contribution to citizens' quality of life. As early as the 2000s, he was one of the architects of the City of Montreal's economic development action plan, which relied in part on design to showcase Quebecers' creativity. He was also one of the architects of the city's sustainable development policy.

His interest in architectural quality also shows in his support for competitions: several have been held in his borough since 2006, and some of the resulting projects have won OAQ awards of excellence as well as the Governor General's Medal in Architecture.

Moreover, Alan DeSousa was not only the first elected official to support the proposed Politique québécoise de l'architecture; he also used his leadership to convince other elected officials to join the movement. His efforts have clearly borne fruit: on April 5, the ministère de la Culture et des Communications du Québec officially launched work on the future Stratégie québécoise de l'architecture. »

To visit the website of the Ordre des architectes du Québec…

Student competition on school architecture as a space for creation, innovation and critique

Press release:

« The Laboratoire d'étude de l'architecture potentielle (LEAP) is pleased to announce the launch of a student competition held as part of a research-creation project funded by the Social Sciences and Humanities Research Council of Canada (SSHRC). The project is led by professors Anne Cormier, Jean-Pierre Chupin and Georges Adamczyk of the École d'architecture de l'Université de Montréal.

Teams of at least three students enrolled in, or admitted to, a master's program at a Quebec university are invited to reflect on school architecture as a space for creation, innovation and critique. These teams (1) must submit an application file no later than May 17, 2019.

Six teams will be selected to take part in the competition and in a workshop on school architecture to be held on Saturday, June 1, 2019 at the Faculté de l'aménagement of the Université de Montréal. The workshop, to which experts in architecture and education are invited, will offer students training to support their reflection. It will open in the morning with public lectures by Mark Dudek and Adam Wood, two internationally renowned specialists in school architecture. Mark Dudek is an architect practising in London and the author of numerous books; Adam Wood, also from the United Kingdom, is a social science researcher specializing in school architecture.

In September 2019, a jury of figures from the fields of architecture and education will evaluate the competitors' proposals and select a winner. All six projects will be exhibited and published on the website of the Canadian Competitions Catalogue (https://www.ccc.umontreal.ca).

Each of the six participating teams will receive $1,500, and a $3,000 prize will go to the winning team.

For more information, visit the LEAP website at https://leap-architecture.org/ or contact concours.leap@gmail.com.

LEAP is an interuniversity research group bringing together researchers from the Université de Montréal, Concordia University, McGill University and UQAM.

(1) At least one of these students must be enrolled in, or admitted to, the master's program in architecture at the Université de Montréal, McGill University or Université Laval. In addition, all students must have been enrolled in a university program during the winter 2019 term. »

To consult the original press release…

To consult the competition poster…

To consult the poster for the lectures related to the competition…

To visit the website of the Laboratoire d'étude de l'architecture potentielle…
