Mahdi Namazifar on LinkedIn: We still have 1 open internship position in Amazon AGI. If someone you… | 12 comments (2024)

Mahdi Namazifar

Principal Scientist at Amazon AGI

We still have 1 open internship position in Amazon AGI. If someone you know is a great PhD student with deep knowledge and expertise in LLMs and ML who is looking for a summer internship, I would appreciate it if you send them my way :-)

99

12 Comments

Vamsi Agnihotram

MS CSE @SJSU spec: Data science || Ex - TCS Digital || Active search: Summer 2024 internship, Fall 2024 co-op. || Campus ambassador @Adobe

53m

Hello Respected Mahdi Namazifar, I'm a student at San Jose State University, specializing in Data Science, and an ex-Data Scientist at TCS Digital. My expertise spans NLP, machine learning, neural networks, DA, and ETL pipelines using PySpark. My projects include a sentiment analysis tool using a BERT LLM for hotel ratings and a neural network-powered attendance system, for which I wrote and published a research paper in a European journal. Both demonstrate my ability to leverage deep learning for practical solutions. Recently, at the Stanford TreeHacks Hackathon, I developed 'VitaVisuals', which uses DL and NLP to simplify medical reports, enhanced by a chatbot integrated with an Amazon SageMaker LLM. Amazon is my top choice among potential employers, and I am eager to bring my skills to AGI; this role aligns perfectly with my academic background, professional experience, and future career aspirations. Attached is my resume. Many thanks for considering my application, and I am open to any further discussion. Best regards, Vamsi Agnihotram



Amazon Science

4h

Great opportunity!


Vamsi Agnihotram

MS CSE @SJSU spec: Data science || Ex - TCS Digital || Active search: Summer 2024 internship, Fall 2024 co-op. || Campus ambassador @Adobe

48m

Since the links cannot be accessed directly from the image of my resume and I can't attach a file in the comment section, please visit my profiles for further details on my projects and professional background: 1. My GitHub: https://github.com/Vamsi-Agnihotram-18 2. My LinkedIn: https://www.linkedin.com/in/vamsi-agnihotram-7b19131a8/


Venkata Naga Sai Kumar Bysani

Lead Data Analyst | BCBS Of South Carolina | Featured on Times Square, Fox, NBC | MS in Data Science at UConn | Proven track record in driving actionable insights and predictive analytics | 350+TopMate Bookings

45m

Cfvr


Zhenyu W.

Ph.D. Candidate at the University of Texas at Dallas|seeking full-time Research Scientist/MLE roles in Speech/Multimodal ML/Gen AI

1h

I am really interested! How can I send my resume your way?


Yashashvini Rachamallu

Graduate Teaching Assistant @ Michigan State University | Ex- Intern @ Intel | Master of Science in Computer Science

39m

Hello Mahdi, I have sent my resume through a message for your review. Looking forward to the opportunity to join your team! Thank you!


Disha V.

51m

Interested.


Madhura J.

Tech Product Manager | UCF CS Grad | The Wharton School - PM&S 2022 | Ex- Symantec, Veritas | GenAI & ML

1h

Jyoti Kini


Pulkit Kapur

Product Leader @ Amazon | Generative AI | Autonomous Systems

1h

awesome team and great impact!



More Relevant Posts

  • Mahdi Namazifar

    Principal Scientist at Amazon AGI

    Learning from human feedback is critical in training LLMs. RLHF focuses on binary human feedback (preference). However, human feedback does not have to be limited to such binary signals. In our most recent paper we show that learning from human feedback expressed in natural language is highly efficient and effective. In this work, which we call Critique and Revise (CnR), we show that with 1000 critique examples a 40b-parameter LLM can be trained to improve the responses from even ChatGPT! https://lnkd.in/ggpKghYN

    2311.14543.pdf arxiv.org

    203


  • Mahdi Namazifar

    Principal Scientist at Amazon AGI

    If you're interested in topics related to controllability of LLMs, make sure to check this workshop out!

    19


  • Mahdi Namazifar

    Principal Scientist at Amazon AGI

    Our KILM paper was accepted at ACL 2023! In this collaboration with Yan XU, Devamanyu Hazarika, Di Jin, Aishwarya Padmakumar, Yang Liu, and Dilek Hakkani-Tur, we show how atomic pieces of knowledge can be injected into a pre-trained large language model. Paper: https://lnkd.in/gRS9up3p Code: https://lnkd.in/gApv3zdz Amazon Science

    KILM: Knowledge Injection into Encoder-Decoder Language Models arxiv.org

    152

    10 Comments


  • Mahdi Namazifar

    Principal Scientist at Amazon AGI

    At #Alexa AI, with Devamanyu Hazarika and Dilek Hakkani-Tur, we recently published a new paper that studies the role of bias terms in dot-product attention (the attention used in Transformers). In this work we show that the bias term of the "key" linear transformation is redundant. Moreover, we show that the bias term of the "value" linear transformation plays a more prominent role than that of the "query" linear transformation. #amazonscience https://lnkd.in/gTDrTGe6
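    The redundancy of the key bias has a simple intuition: for a fixed query q, a bias b_k in the key projection adds the same constant q·b_k to every attention score, and softmax is invariant to a constant shift. A minimal NumPy sketch of this invariance (illustrative only, not the paper's code):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = 8
q = rng.normal(size=d)         # one query vector
K = rng.normal(size=(5, d))    # projected keys W_k @ x_i for 5 positions, no bias
b_k = rng.normal(size=d)       # a key-projection bias term

w_no_bias = softmax(K @ q)
w_bias = softmax((K + b_k) @ q)  # every score shifts by the same q . b_k

# Attention weights are identical with or without the key bias.
assert np.allclose(w_no_bias, w_bias)
```

    The same cancellation does not apply to the value bias, which enters after the softmax and therefore directly shifts the attention output.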

    Role of Bias Terms in Dot-Product Attention arxiv.org

    75

    3 Comments


  • Mahdi Namazifar

    Principal Scientist at Amazon AGI

    At #Alexa AI we recently published two papers accepted at EMNLP 2022. The first studies prefix tuning through the lens of kernels and proposes an efficient adaptation approach for Transformers. The second studies the role of language instructions in action learning for human-robot interaction. I'm grateful for such amazing collaborators! #amazonscience https://lnkd.in/gNGw4ynx https://lnkd.in/gyZ3uH9p

    Inducer-tuning: Connecting prefix-tuning and adapter-tuning amazon.science

    51

    2 Comments


  • Mahdi Namazifar

    Principal Scientist at Amazon AGI

    We recently published two papers accepted at #NAACL22. In the first work we propose a novel approach to capture the contents of a long document in a graph and later use that graph in response generation. In the second work we propose a more efficient attention adaptation approach based on kernel theory. I'm immensely thankful to be part of such an amazing team led by Dilek Hakkani-Tur at #Amazon #Alexa AI! I'm also grateful for the opportunity to work with each and every one of the co-authors of these two works! #AmazonScience 1) https://lnkd.in/eicBa5ns 2) https://lnkd.in/egM9T4Xq

    Enhanced knowledge selection for grounded dialogues via document semantic graphs amazon.science

    83

    8 Comments


  • Mahdi Namazifar

    Principal Scientist at Amazon AGI

    Our work on zero-shot control of Natural Language Generation (NLG) was accepted at AAAI 2022. Extended preprint: https://lnkd.in/gRc-EK4 In this work we show that 1) attention can be biased zero-shot without catastrophic forgetting, and 2) NLG can be controlled zero-shot with attention biasing and context augmentation. It's been a great learning experience to work with Devamanyu Hazarika and Dilek Hakkani-Tur on this paper as part of the big #AlexaAI family! #AmazonScience
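    To illustrate the general idea behind attention biasing (a sketch of the concept, not the paper's method): adding a bias to the pre-softmax attention scores shifts attention mass toward chosen context positions without retraining any weights.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

# Raw attention scores of one query over 4 context tokens.
scores = np.array([2.0, 1.0, 0.5, 0.1])

# Hypothetical bias upweighting the third context token at inference time.
bias = np.array([0.0, 0.0, 4.0, 0.0])

w = softmax(scores)
w_biased = softmax(scores + bias)

# The biased distribution puts more attention mass on the chosen token,
# while remaining a valid probability distribution.
assert w_biased[2] > w[2]
```

    Because the model parameters are untouched, such a manipulation is "zero-shot" in the sense of requiring no additional training.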

    Zero-Shot Controlled Generation with Encoder-Decoder Transformers

    66

    2 Comments


  • Mahdi Namazifar

    Principal Scientist at Amazon AGI

    In our most recent work we show how a trained Natural Language Generation (NLG) model can be manually manipulated for zero-shot control! We also report some unexpected behaviors of Transformers that can be leveraged for efficient training of NLG models. With Devamanyu Hazarika and Dilek Hakkani-Tur https://lnkd.in/gRc-EK4

    Zero-Shot Controlled Generation with Encoder-Decoder Transformers

    57


  • Mahdi Namazifar

    Principal Scientist at Amazon AGI

    In our most recent work with @JohnMalik, LI Erran Li, Gokhan Tur, and Dilek Hakkani-Tur, we show how Warped Language Models (WLMs) can be used to correct audio transcriptions. We show that even human transcriptions can be improved by up to 10% using WLMs. https://lnkd.in/gjsYT2K

    Correcting Automated and Manual Speech Transcription Errors using Warped Language Models

    57

